WorldWideScience

Sample records for atlas computers

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.
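
    The exclusion mechanism described here — deciding from recent functional-test results whether a site should be removed from production or analysis — can be illustrated with a minimal, hypothetical rule. The thresholds, the TestResult structure and the test feed below are assumptions for illustration only, not the actual ADC implementation.

      # Minimal sketch of an automatic-exclusion rule driven by recent
      # functional-test results per site.  Thresholds and data layout are
      # illustrative assumptions, not the production ADC logic.
      from collections import namedtuple

      TestResult = namedtuple("TestResult", "site passed")

      def sites_to_exclude(results, min_tests=10, max_failure_rate=0.5):
          """Return the sites whose recent test failure rate is too high."""
          stats = {}
          for r in results:
              passed, total = stats.get(r.site, (0, 0))
              stats[r.site] = (passed + (1 if r.passed else 0), total + 1)
          return {site for site, (passed, total) in stats.items()
                  if total >= min_tests and 1 - passed / total > max_failure_rate}

      if __name__ == "__main__":
          feed = ([TestResult("SITE_A", True)] * 8 + [TestResult("SITE_A", False)] * 2
                  + [TestResult("SITE_B", False)] * 12)
          print(sites_to_exclude(feed))  # {'SITE_B'}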

  2. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape, and spotting areas which required improvement. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, through tuning of the FTS channels between CERN and the Tier-1s, to studies of data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  3. The ATLAS Computing Model

    International Nuclear Information System (INIS)

    The ATLAS computing model was constructed to exploit the opportunities of the Grid in handling the large volumes of data from the Large Hadron Collider and to allow easy and relatively local access to data for all of its collaborators worldwide. Despite delays with collision data, the model has now been tested with beam-related and cosmic ray data, and much has been learned. The model has retained its overall design, but with adjustments for the actual functionality of the delivered Grid middleware and services, and for the realities of data access. While much has still to be learned, the model has worked effectively. This presentation will cover the roles of the various Tiers, the resources required and the expected workflows. (author)

  4. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; a hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization can be seen in Fig. 1. Figure 1: the new ATLAS Software & Computing organization. Two Management Boards will help the Computing Coordinator and the Software Project...

  5. The ATLAS GridKa computing federation

    International Nuclear Information System (INIS)

    The ATLAS computing infrastructure in Germany consists of a federation, or cloud, of LCG Tier-2 sites around the GridKa Tier-1. Currently this cloud comprises six sites in Germany, complemented by three more sites in the neighboring countries Poland, the Czech Republic and Switzerland. According to the plans laid out in the ATLAS computing model, a common and coordinated operation of this cloud is required in order to distribute datasets, production and user-analysis jobs to the sites involved. (orig.)

  6. Evolving ATLAS Computing For Today's Networks

    International Nuclear Information System (INIS)

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the Tier-1s) can cover important functional roles such as hosting master copies of the data.

  7. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...
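
    As a concrete illustration of querying such an analytics cluster, the hedged sketch below posts an aggregation query to an Elasticsearch-style endpoint; the URL, index name and field names ("site", "bytes") are placeholders rather than the real ADC analytics schema.

      # Hedged sketch: ask an Elasticsearch-style analytics cluster for the
      # total transfer volume per site.  Endpoint and field names are made up.
      import requests

      ES_URL = "http://analytics.example.org:9200/transfers/_search"  # hypothetical

      query = {
          "size": 0,
          "aggs": {
              "by_site": {
                  "terms": {"field": "site"},
                  "aggs": {"volume": {"sum": {"field": "bytes"}}},
              }
          },
      }

      resp = requests.post(ES_URL, json=query, timeout=30)
      resp.raise_for_status()
      for bucket in resp.json()["aggregations"]["by_site"]["buckets"]:
          print(f"{bucket['key']}: {bucket['volume']['value'] / 1e12:.1f} TB")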

  8. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.
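
    To make the "cloud factory" idea concrete, the hedged sketch below shows one way a provisioning loop could start worker VMs on an EC2-compatible cloud when a queue falls behind. The image ID, instance type, tag name and the queue-status helper are hypothetical; this is not the ATLAS production code.

      # Hedged sketch of a cloud-factory style scale-up: start worker VMs on an
      # EC2-compatible cloud when a (hypothetical) queue reports a worker deficit.
      import boto3

      def queued_minus_running(queue_name):
          """Placeholder for a queue-status lookup; returns the worker deficit."""
          return 5

      def scale_up(queue_name, image_id="ami-00000000", instance_type="m5.large"):
          deficit = queued_minus_running(queue_name)
          if deficit <= 0:
              return []
          ec2 = boto3.resource("ec2")
          instances = ec2.create_instances(
              ImageId=image_id,
              InstanceType=instance_type,
              MinCount=1,
              MaxCount=deficit,
              TagSpecifications=[{
                  "ResourceType": "instance",
                  "Tags": [{"Key": "worker_queue", "Value": queue_name}],
              }],
          )
          return [i.id for i in instances]

      if __name__ == "__main__":
          print(scale_up("CLOUD_QUEUE_EXAMPLE"))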

  9. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  10. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  11. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  12. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IaaS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  13. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.
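
    A client of such an information system typically just pulls the topology as JSON over HTTP. The sketch below assumes a hypothetical endpoint and field names ("name", "cloud", "tier_level"); it is not the actual AGIS API.

      # Hedged sketch: fetch site topology from a central information system
      # exposing JSON over HTTP.  Endpoint and fields are placeholders.
      import requests

      INFO_SYS_URL = "https://info-system.example.org/api/sites?json"  # hypothetical

      def sites_by_cloud():
          resp = requests.get(INFO_SYS_URL, timeout=30)
          resp.raise_for_status()
          grouped = {}
          for site in resp.json():
              grouped.setdefault(site["cloud"], []).append(
                  (site["name"], site.get("tier_level")))
          return grouped

      if __name__ == "__main__":
          for cloud, sites in sorted(sites_by_cloud().items()):
              print(cloud, "->", ", ".join(name for name, _ in sites))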

  14. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization, exploiting opportunistic resources such as HPC, cloud, and volunteer computing, is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  15. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  16. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  17. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and roles management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and the gateways.
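
    As an illustration of the LDAP-based role management mentioned above, the sketch below performs a group lookup with the python-ldap bindings. The server URI, base DN and schema are placeholders, not the real TDAQ directory layout.

      # Hedged sketch of an LDAP role lookup; URI, base DN and attributes are
      # hypothetical and chosen only to illustrate the approach.
      import ldap  # python-ldap

      LDAP_URI = "ldap://ldap.example.org"         # hypothetical
      BASE_DN = "ou=atlas-tdaq,dc=example,dc=org"  # hypothetical

      def roles_of(username):
          conn = ldap.initialize(LDAP_URI)
          conn.simple_bind_s()  # anonymous bind for the example
          results = conn.search_s(
              BASE_DN, ldap.SCOPE_SUBTREE,
              filterstr=f"(&(objectClass=groupOfNames)(member=uid={username},{BASE_DN}))",
              attrlist=["cn"])
          conn.unbind_s()
          return [attrs["cn"][0].decode() for _, attrs in results]

      if __name__ == "__main__":
          print(roles_of("someuser"))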

  18. System administration of ATLAS TDAQ computing environment

    International Nuclear Information System (INIS)

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and roles management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and the gateways.

  19. The Evolution of Cloud Computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Domingues Cordeiro, Cristovao Jose; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; LeBlanc, Matthew; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-12-01

    The ATLAS experiment at the LHC has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing Infrastructure as a Service resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, a system for dynamic location-based discovery of caching proxy servers, and the usage of a data federation to unify the worldwide grid of storage elements into a single namespace and access point. The usage of the experiment's high level trigger farm for Monte Carlo production, in a specialized cloud environment, is presented. Finally, we evaluate and compare the performance of commercial clouds using several benchmarks.

  20. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources which feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the outcome of site-by-site SAM (Service Availability Monitoring) SRM tests. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites.

  1. ATLAS Distributed Computing in LHC Run2

    Science.gov (United States)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented.
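
    The lifetime-based data management strategy can be illustrated with a small hypothetical rule: each dataset carries a lifetime, and replicas of expired datasets become candidates for deletion. The field names and the grace period below are assumptions, not the actual policy.

      # Hedged sketch of a lifetime-based cleanup rule: datasets whose creation
      # time plus lifetime (plus a grace period) lies in the past are expired.
      from datetime import datetime, timedelta

      def expired_datasets(datasets, now=None, grace=timedelta(days=30)):
          now = now or datetime.utcnow()
          for ds in datasets:
              if ds["lifetime"] is None:  # no lifetime means keep forever
                  continue
              if ds["created"] + ds["lifetime"] + grace < now:
                  yield ds["name"]

      if __name__ == "__main__":
          catalogue = [
              {"name": "data15.raw", "created": datetime(2015, 6, 1), "lifetime": None},
              {"name": "user.test.v1", "created": datetime(2015, 1, 1),
               "lifetime": timedelta(days=90)},
          ]
          print(list(expired_datasets(catalogue, now=datetime(2015, 12, 1))))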

  2. Automating usability of ATLAS Distributed Computing resources

    Science.gov (United States)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of the SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board, with both dedicated metrics and views. The resulting structure allows the status of the storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show the SAAB working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
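
    A blacklisting decision of the kind SAAB makes can be sketched as a simple rule over the recent history of test results for one storage area. The window size and thresholds below are illustrative assumptions, not the SAAB algorithm itself.

      # Hedged sketch of a blacklist/recover decision over a window of recent
      # storage test results (True = test passed, most recent last).
      def decide(history, window=6, fail_threshold=5, ok_threshold=4, blacklisted=False):
          recent = history[-window:]
          if not blacklisted and recent.count(False) >= fail_threshold:
              return "blacklist"
          if blacklisted and recent.count(True) >= ok_threshold:
              return "recover"
          return "no action"

      if __name__ == "__main__":
          print(decide([True, False, False, False, False, False]))               # blacklist
          print(decide([False, True, True, True, True, False], blacklisted=True))  # recover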

  3. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of the SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board, with both dedicated metrics and views. The resulting structure allows the status of the storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show the SAAB working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.

  4. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system, in which: the data sources are existing RDBMSs (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R is used for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
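
    The kind of aggregation described above (rolling raw job records up into per-site metrics, done in the real system with Pig/Hadoop) can be sketched in a few lines of plain Python; the record fields are assumed for illustration.

      # Hedged sketch of a per-site aggregation over job records; field names
      # ("site", "status") are illustrative, not the actual warehouse schema.
      from collections import defaultdict

      def per_site_efficiency(job_records):
          counts = defaultdict(lambda: [0, 0])  # site -> [finished, total]
          for job in job_records:
              counts[job["site"]][1] += 1
              if job["status"] == "finished":
                  counts[job["site"]][0] += 1
          return {site: ok / total for site, (ok, total) in counts.items()}

      if __name__ == "__main__":
          records = [
              {"site": "SITE_A", "status": "finished"},
              {"site": "SITE_A", "status": "failed"},
              {"site": "SITE_B", "status": "finished"},
          ]
          print(per_site_efficiency(records))  # {'SITE_A': 0.5, 'SITE_B': 1.0}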

  5. The ATLAS Distributed Computing: the challenges of the future

    CERN Document Server

    Sakamoto, H; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has collected more than 25 fb-1 of data since the LHC started its operation in 2010. Tens of petabytes of collision events and Monte-Carlo simulations are stored at more than 150 computing centers all over the world. The data processing is performed on grid sites providing more than 100,000 computing cores and is orchestrated by the ATLAS in-house developed job and data management services. The discovery of the Higgs-like boson in 2012 would not have been possible without the excellent performance of ATLAS Distributed Computing. The future ATLAS experiment operation with increased LHC beam energy and luminosity, foreseen for 2014, imposes a significant increase in the computing demands that ATLAS Distributed Computing needs to satisfy. Therefore, development of new data-processing, storage and data-distribution systems has been started to efficiently use the computing resources, exploiting current and future technologies of distributed computing.

  6. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  7. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more flexible scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  8. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Bourdarios, Claire; Filipcic, Andrej; Lancon, Eric; Wu, Wenjing

    2015-01-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte-Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  9. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    This paper covers in detail the variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  10. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and the re-usability of the visua...

  11. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and the re-usability of the visua...

  12. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to evolution of the ATLAS Computing Model.

  13. The December 2006 ATLAS Computing & Software Workshop

    CERN Multimedia

    Fred Luehring

    The 29th ATLAS Computing & Software Workshop was held on December 11-15 at CERN. With the rapidly approaching onset of data taking, the workshop participants had an air of urgency about them. There was considerable discussion on hot topics such as physics validation of the software, data analysis, actual software production on the GRID, and the schedule of work for 2007 including the Final Dress Rehearsal (FDR). However, don't be fooled: the workshop was not all work - there were also two social events which were greatly enjoyed by the attendees. The workshop welcomed Wouter Verkerke as the new Physics Validation Coordinator (replacing Davide Costanzo). Most recent validation work has centered on the 12.0.X release series that will be used for the Computing System Commissioning (CSC) exercise. The validation is now a big job because it needs to be done over a variety of conditions (magnetic field on/off, aligned/misaligned geometry) for every candidate release. Luckily there have been a large number of pe...

  14. ATLAS Distributed Computing Challenges and Plans for the Future

    CERN Document Server

    Klimentov, A; The ATLAS collaboration

    2011-01-01

    The following topics will be addressed: Data Model and Data Placement evolution, and the evaluation of new software technologies, such as cloud computing, for LHC computing. The ATLAS collaboration has been interested in cloud computing since commercial clouds like Amazon EC2 became available. We launched an R&D project (together with WLCG) to study cloud computing for ATLAS, and then to design and implement cloud awareness in the Distributed Data Management system, in production and distributed analysis (PanDA), and in related tools and services.

  15. ATLAS distributed computing operations in the GridKa cloud

    International Nuclear Information System (INIS)

    The ATLAS Grid Computing resources in Germany, Poland, the Czech Republic, Austria, and Switzerland consist of a cloud of 12 Tier-2 computing centers grouped around the Tier-1 center GridKa at the Steinbuch Centre for Computing at KIT. While the Tier-1 center serves as a hub for data management in the cloud and is the principal resource for reprocessing and custodial storage of raw ATLAS data, the Tier-2 centers provide the resources for user analysis and production of simulated events. During the first full year of data taking at the LHC, the GridKa cloud has successfully contributed to the overall ATLAS computing effort, enabling physicists to quickly analyze the large volume of new incoming data and the corresponding simulated events. This talk covers the computing operations in the GridKa cloud with focus on performance and experiences at both the Tier-1 and Tier-2 centers.

  16. ATLAS computing at the GridKa Tier-1 centre

    International Nuclear Information System (INIS)

    Computing in ATLAS is organized in so-called clouds, each led by a Tier-1 centre. For the "DECH" cloud covering Germany, Poland, the Czech Republic, Austria and Switzerland (without CERN), this is the GridKa computing centre at the Steinbuch Centre for Computing (FZK/KIT) in Karlsruhe. The Tier-1 provides crucial services for data management and production, which have been developed and extensively tested during the last years. After the start of the LHC, these tools have to prove their reliability. The talk presents the operation of the Tier-1 centre from the ATLAS point of view, with an emphasis on the performance of, and the experience gained from, distributing and processing the first ATLAS data. An overview of the current status and progress in the other areas is also given.

  17. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2014-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  18. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2013-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  19. Distributed computing operations in the German ATLAS cloud

    International Nuclear Information System (INIS)

    Before announcing the discovery of a Higgs-like boson on the 4th of July 2012, a huge amount of data had to be distributed around the world and analysed. Moreover, to have well optimised analyses with solid background estimates, Monte Carlo simulated event samples needed to be generated. All of this, data distribution, Monte Carlo production, and also data reprocessing, is performed by the Worldwide LHC Computing Grid. The ATLAS grid computing resources in Austria, the Czech Republic, Germany, Poland, and Switzerland are organized in the GridKa cloud, which is one out of 10 ATLAS computing clouds. It consists of the Tier-1 centre at KIT in Karlsruhe, which serves as a hub for data management and stores raw ATLAS data, and the Tier-2 centres that provide the resources for user analysis and Monte Carlo sample production. This talk gives an overview of the ATLAS grid computing operations in 2012, focusing on the performance and experiences at both the Tier-1 and Tier-2 centres, and it summarises the prospects and requirements for grid computing during and after the long shutdown of the LHC in 2013/2014.

  20. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    Science.gov (United States)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort has been invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the data and visualization layer separation. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
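
    The data-layer/visualization-layer separation can be illustrated with a tiny hypothetical data-layer service that exposes accounting numbers as JSON in a fixed format and leaves all rendering to a separate front end; the endpoint path and the figures served below are made up.

      # Hedged sketch of a data-layer HTTP service: fixed JSON format, no
      # rendering.  Endpoint and numbers are illustrative placeholders.
      from flask import Flask, jsonify

      app = Flask(__name__)

      # In a real system this would be filled from the monitoring back ends.
      FAKE_ACCOUNTING = {"SITE_A": {"cpu_hours": 1.2e6}, "SITE_B": {"cpu_hours": 8.5e5}}

      @app.route("/api/accounting/sites")
      def accounting_per_site():
          # Predefined, versionable JSON format; any dashboard can consume it.
          return jsonify({"version": 1, "sites": FAKE_ACCOUNTING})

      if __name__ == "__main__":
          app.run(port=8080)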

  1. The Next Generation ARC Middleware and ATLAS Computing Model

    International Nuclear Information System (INIS)

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing environment but follow a slightly different paradigm than other ATLAS resources. The current paradigm does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS’ global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new services for job control and data transfer. Integration of the ARC core into the EMI middleware provides a natural way to implement the new services using the ARC components

  2. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    CERN Document Server

    Schovancová, J; The ATLAS collaboration

    2012-01-01

    This paper details the variety of monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centers distributed world-wide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities, ranging from network performance and data transfer throughput, through data processing and the readiness of the computing services at the ATLAS computing centers, to the reliability and usability of the ATLAS computing centers. The described tools provide monitoring for issues of different levels of criticality: from spotting issues with the instant online monitoring to the long-term accounting information.

  3. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Bauce, Matteo; Dankel, Maik; Howard, Jacob; Kama, Sami

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. These data are processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predates co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this talk we will present studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction, as well as their integration into a multiple-process-based Athena frame...

  4. Next generation database relational solutions for ATLAS distributed computing

    Science.gov (United States)

    Dimitrov, G.; Maeno, T.; Garonne, V.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability in order to be ready for deployment and operation in 2014.

  5. Next generation database relational solutions for ATLAS distributed computing

    International Nuclear Information System (INIS)

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability in order to be ready for deployment and operation in 2014.

  6. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  7. ATLAS Distributed Computing Shift Operation in the first 2 full years of LHC data taking

    International Nuclear Information System (INIS)

    ATLAS Distributed Computing organized 3 teams to support data processing at the Tier-0 facility at CERN, and data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located worldwide. In this paper, we describe how these teams ensure that the ATLAS experiment data is delivered to the ATLAS physicists in a timely manner in the glamorous era of LHC data taking. We describe our experience in improving degraded service performance, and we detail the Distributed Analysis support during the exciting period of the computing model evolution.

  8. The future of PanDA in ATLAS distributed computing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
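
    One of the directions mentioned, folding network metrics into workload decisions, can be sketched as a simple scoring rule over candidate sites. The weight and the input fields below are illustrative assumptions, not the PanDA brokerage algorithm.

      # Hedged sketch: pick a site by free job slots penalised by a measured
      # network cost to the input data.  All numbers are made up.
      def choose_site(candidates, network_cost, weight=0.001):
          """candidates: {site: free_slots}; network_cost: {site: cost metric}."""
          def score(site):
              return candidates[site] - weight * network_cost.get(site, 0.0)
          return max(candidates, key=score)

      if __name__ == "__main__":
          slots = {"SITE_A": 500, "SITE_B": 800}
          cost = {"SITE_A": 1e4, "SITE_B": 4e5}  # SITE_B is far from the input data
          print(choose_site(slots, cost))         # SITE_A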

  9. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This paper surveys the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  10. Preparing ATLAS Distributed Computing for LHC Run 2

    CERN Document Server

    Lancon, E; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This presentation will survey the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  11. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This presentation will survey the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  12. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  13. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  14. ATLAS Great Lakes Tier-2 Computing and Muon Calibration Center Commissioning

    CERN Document Server

    McKee, Shawn

    2009-01-01

    Large-scale computing in ATLAS is based on a grid-linked system of tiered computing centers. The ATLAS Great Lakes Tier-2 came online in September 2006 and is now being commissioned at full capacity to provide significant computing power and services to the USATLAS community. Our Tier-2 Center also hosts the Michigan Muon Calibration Center, which is responsible for the daily calibration of the ATLAS Monitored Drift Tubes of the ATLAS endcap muon system. During the first LHC beam period in 2008 and the subsequent ATLAS global cosmic-ray data-taking period, the Calibration Center received a large data stream from the muon detector to derive the drift-tube timing offsets and time-to-space functions with a turn-around time of 24 hours. We will present the Calibration Center commissioning status and our plan for the first LHC beam collisions in 2009.
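
    As a rough illustration of the drift-tube timing-offset calibration mentioned above, the sketch below estimates a t0 from a simulated drift-time spectrum by locating the point where its leading edge crosses half of the plateau height. This is a deliberately simplified stand-in assuming an idealized flat spectrum; the real calibration uses detailed fits and the full data stream.

        import numpy as np

        def estimate_t0(drift_times_ns, bin_width_ns=1.0):
            """Return the time where the drift-time spectrum first reaches half of its plateau."""
            lo, hi = drift_times_ns.min(), drift_times_ns.max()
            counts, edges = np.histogram(drift_times_ns,
                                         bins=np.arange(lo, hi + bin_width_ns, bin_width_ns))
            plateau = np.percentile(counts, 90)            # crude plateau estimate
            above = np.nonzero(counts >= 0.5 * plateau)[0]
            return edges[above[0]] if above.size else None

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            true_t0 = 120.0                                # ns, invented
            times = true_t0 + 700.0 * rng.random(100_000)  # idealized flat spectrum, ~700 ns long
            print(f"estimated t0 ~ {estimate_t0(times):.1f} ns")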

  15. Analysis of Craniofacial Images using Computational Atlases and Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur

    2008-01-01

    The topic of this thesis is automatic analysis of craniofacial images. The methods proposed and applied contribute to the scientific knowledge about different craniofacial anomalies, in addition to providing tools for detailed and robust analysis of craniofacial images for clinical and research... purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy of... findings about the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed...
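
    Two of the ingredients named in the abstract, a computational atlas as the average of registered images and an asymmetry assessment, can be sketched with plain arrays. The code below is only a toy illustration assuming the images are already in a common coordinate system; it is not the thesis software and ignores the non-rigid registration step entirely.

        import numpy as np

        def build_atlas(registered_images):
            """Voxel-wise mean of images assumed to share one coordinate system."""
            return np.mean(np.stack(registered_images), axis=0)

        def asymmetry_map(image):
            """Absolute difference between an image and its left-right mirror."""
            return np.abs(image - np.flip(image, axis=-1))

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            scans = [rng.random((64, 64)) for _ in range(5)]  # stand-ins for registered scans
            atlas = build_atlas(scans)
            print(atlas.shape, float(asymmetry_map(atlas).mean()))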

  16. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons the system should be accessible only through an application gateway and, to ensure the autonomy of the system, the network services should be provided internally by dedicated machines kept in synchronization with the central services of the CERN IT department. The paper describes a small-scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  17. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  18. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  19. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  20. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of Monte Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...
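
    The lifetime-based data management strategy mentioned at the end of the abstract can be sketched as a simple expiry rule: each dataset carries an expiration timestamp, and replicas of expired datasets become deletion candidates. The snippet below is a hypothetical illustration with invented field names; it is not Rucio code.

        from datetime import datetime, timedelta, timezone

        def expired_datasets(datasets, now=None):
            """Names of datasets whose lifetime has run out (None means 'keep forever')."""
            now = now or datetime.now(timezone.utc)
            return [d["name"] for d in datasets
                    if d.get("expires_at") is not None and d["expires_at"] <= now]

        if __name__ == "__main__":
            now = datetime(2015, 6, 1, tzinfo=timezone.utc)
            datasets = [
                {"name": "data12.old_format", "expires_at": now - timedelta(days=30)},
                {"name": "mc15.fresh_sample", "expires_at": now + timedelta(days=365)},
                {"name": "user.pinned_output", "expires_at": None},  # no lifetime set
            ]
            print(expired_datasets(datasets, now))  # ['data12.old_format']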

  1. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo Production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.
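
    One of the listed services, the consistency check between the data management catalogue and what actually sits on a storage element, reduces to a set comparison once both listings are available. The sketch below illustrates the idea with invented file names; a production check would of course operate on full catalogue and storage dumps.

        def consistency_check(catalogue_files, storage_files):
            """Split the disagreement into 'dark' files (storage only) and lost files (catalogue only)."""
            dark = storage_files - catalogue_files   # occupy space but are unknown to the catalogue
            lost = catalogue_files - storage_files   # registered but missing on storage
            return dark, lost

        if __name__ == "__main__":
            catalogue = {"AOD.001.root", "AOD.002.root", "AOD.003.root"}
            storage = {"AOD.001.root", "AOD.003.root", "tmp.leftover.root"}
            dark, lost = consistency_check(catalogue, storage)
            print("dark:", sorted(dark), "lost:", sorted(lost))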

  2. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During the last years the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison to, and a network dependence on, a Tier1, to a more meshed approach where every cloud could be connected. The evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  3. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R and D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  4. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
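
    The event-level (rather than file- or dataset-level) workload idea mentioned at the end of the abstract can be illustrated by carving input files into fixed-size event ranges that independent payloads could process. The generator below is a hypothetical sketch; the file names and the range size are invented.

        def event_ranges(files, events_per_range):
            """Yield (file_name, first_event, last_event) chunks covering every event once."""
            for name, n_events in files:
                start = 0
                while start < n_events:
                    end = min(start + events_per_range, n_events)
                    yield (name, start, end - 1)
                    start = end

        if __name__ == "__main__":
            inputs = [("EVNT.A.pool.root", 2500), ("EVNT.B.pool.root", 1000)]
            for chunk in event_ranges(inputs, events_per_range=1000):
                print(chunk)  # e.g. ('EVNT.A.pool.root', 0, 999) ... ('EVNT.A.pool.root', 2000, 2499)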

  5. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B hadrons, spectroscopy of rare B hadrons, and B^0_s mixing. The ATLAS detector, shown in the figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  6. Tools and strategies to monitor the ATLAS online computing farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm, consisting of nearly 3000 PCs with various characteristics. To ensure correct and optimal working conditions, the whole online system must be constantly monitored. The monitoring system should be able to check up to 100,000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open-source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.
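
    Because Icinga keeps the Nagios plugin conventions, each of the health parameters mentioned above is typically probed by a small check that prints one status line and exits with 0 (OK), 1 (WARNING) or 2 (CRITICAL). The sketch below is a generic example of such a check, monitoring load per core with invented thresholds; it is not one of the farm's actual plugins.

        #!/usr/bin/env python3
        import os
        import sys

        WARN, CRIT = 0.9, 1.5  # invented thresholds: 1-minute load average per core

        def main():
            load1, _, _ = os.getloadavg()               # Unix only
            per_core = load1 / (os.cpu_count() or 1)
            if per_core >= CRIT:
                print(f"CRITICAL - load per core {per_core:.2f}")
                sys.exit(2)
            if per_core >= WARN:
                print(f"WARNING - load per core {per_core:.2f}")
                sys.exit(1)
            print(f"OK - load per core {per_core:.2f}")
            sys.exit(0)

        if __name__ == "__main__":
            main()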

  7. Tools and strategies to monitor the ATLAS online computing farm

    Science.gov (United States)

    Ballestrero, S.; Brasolin, F.; Dârlea, G.–L.; Dumitru, I.; Scannicchio, D. A.; Twomey, M. S.; Vâlsan, M. L.; Zaytsev, A.

    2012-12-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm, consisting of nearly 3000 PCs with various characteristics. To ensure correct and optimal working conditions, the whole online system must be constantly monitored. The monitoring system should be able to check up to 100,000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open-source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.

  8. ATLAS Grid computing activities within the Gridka cloud

    International Nuclear Information System (INIS)

    The WLCG Tier1 at GridKa in Karlsruhe, Germany, has a number of Tier2 sites associated with it. Together the Tier2s, located in Germany, Austria, the Czech Republic, Poland and Switzerland, and the T1 at GridKa form the ATLAS GridKa cloud. Like other clouds in WLCG, the main activities within this cloud are running Monte Carlo production jobs, distributed data management (DDM) issues and operations, tape-reading tests in view of data reprocessing, and monitoring of the transfer efficiencies, throughputs and network status between sites. An overview talk will be presented showing the activity, progress and current status in each of the named areas, together with an evaluation of the cloud's readiness for ATLAS data taking in mid-2008.

  9. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up of the LHC environment, advanced data-analysis techniques are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a hardware-level track-finding implementation designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed and is now being installed in ATLAS. At the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage coarse-resolution hits are matched against the patterns and the accepted hits u...
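
    A heavily simplified software analogue of the associative-memory matching described above is shown below: hits are reduced to coarse "superstrips" per detector layer, and a pattern fires when all of its superstrips are present among the hits. The pattern bank, the superstrip size and the requirement that every layer matches (no majority logic) are all simplifications for illustration.

        SUPERSTRIP_SIZE = 16  # channels per coarse superstrip, invented

        def to_superstrip(channel):
            return channel // SUPERSTRIP_SIZE

        def matched_patterns(hits_per_layer, pattern_bank):
            """hits_per_layer: {layer: set of hit channels}; pattern_bank: list of {layer: superstrip}."""
            coarse = {layer: {to_superstrip(c) for c in chans}
                      for layer, chans in hits_per_layer.items()}
            return [i for i, pattern in enumerate(pattern_bank)
                    if all(ss in coarse.get(layer, set()) for layer, ss in pattern.items())]

        if __name__ == "__main__":
            hits = {0: {5, 130}, 1: {18, 40}, 2: {35}}          # layer -> hit channels
            bank = [{0: 0, 1: 1, 2: 2}, {0: 3, 1: 2, 2: 2}]     # two stored patterns
            print(matched_patterns(hits, bank))                 # [0]: superstrips 0, 1, 2 are all hit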

  10. Swiss ATLAS grid computing in preparation for the LHC collision data

    International Nuclear Information System (INIS)

    ATLAS computing in Switzerland includes two Tier-3 sites with several years of experience, owned by the Universities of Berne and Geneva. They have been used for ATLAS Monte Carlo production, centrally controlled via NorduGrid, since 2005. The Tier-3 sites are under continuous development. In the case of Geneva, the proximity of CERN leads to additional use cases, related to the commissioning of the experiment, which require processing of the latest ATLAS data using the latest software under development, normally not distributed to grid sites. The Swiss Tier-2 at the CSCS centre has a recent and powerful cluster, serving three LHC experiments, including ATLAS. The system features two implementations of the grid middleware, NorduGrid ARC and LCG gLite, which operate simultaneously on the same resources. In this article we present our implementation choices and our experience. We will discuss the requirements of our users and how we meet them. We will present the status of our work and our plans for the ATLAS data-taking period in 2009-2010.

  11. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of LHC pp collisions in 2010, the ATLAS computing model has moved from a stricter design, where every Tier2 was associated with and network-dependent on a single Tier1, to a more meshed approach where every cloud can be connected. The evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model, as they allow more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  12. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
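    The record above (and its copies elsewhere in this listing) notes that OpenStack was chosen as the cloud management layer for running CernVM-based virtual machines on the TDAQ farm. As a rough illustration of what such a layer provides, the sketch below boots a small batch of worker VMs with the openstacksdk Python client; the cloud entry, image, flavor and network names are placeholders, and this is not the actual Sim@P1 provisioning code.

```python
"""Boot a small batch of worker VMs through OpenStack (illustrative only).

Assumes a 'clouds.yaml' entry named 'p1-cloud' and pre-existing image,
flavor and network with the names below; none of these names come from
the Sim@P1 project itself.
"""
import openstack

def boot_workers(n_workers: int = 4):
    conn = openstack.connect(cloud="p1-cloud")   # credentials from clouds.yaml

    image = conn.compute.find_image("cernvm-batch-worker")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("sim-at-p1-net")

    servers = []
    for i in range(n_workers):
        server = conn.compute.create_server(
            name=f"mc-worker-{i:03d}",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        servers.append(server)

    # Wait until the instances are ACTIVE before handing them to the batch system.
    return [conn.compute.wait_for_server(s) for s in servers]

if __name__ == "__main__":
    for s in boot_workers(2):
        print(s.name, s.status)
```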

  13. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    International Nuclear Information System (INIS)

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  14. Monitoring of computing resource utilization of the ATLAS experiment

    International Nuclear Information System (INIS)

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  15. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.
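    The two records above describe gathering a dozen per-job variables, together with per-algorithm and per-object performance figures, from Tier-0 and grid production jobs for later loading into an Oracle database. The snippet below is a hypothetical, much-simplified collector showing the kind of per-algorithm bookkeeping involved; it is not the ATLAS tool, and the record layout is invented.

```python
"""Toy per-job performance collector (illustrative, not the ATLAS tool).

Records CPU and wall-clock time per 'algorithm' plus a per-job memory
figure, and dumps everything as one JSON record that a downstream loader
could insert into a database. Uses the Unix-only 'resource' module.
"""
import json
import resource
import time
from contextlib import contextmanager

class JobMonitor:
    def __init__(self, job_id: str):
        self.record = {"job_id": job_id, "algorithms": {}}

    @contextmanager
    def algorithm(self, name: str):
        cpu0 = resource.getrusage(resource.RUSAGE_SELF).ru_utime
        t0 = time.perf_counter()
        try:
            yield
        finally:
            self.record["algorithms"][name] = {
                "cpu_s": resource.getrusage(resource.RUSAGE_SELF).ru_utime - cpu0,
                "wall_s": time.perf_counter() - t0,
            }

    def finalize(self) -> str:
        usage = resource.getrusage(resource.RUSAGE_SELF)
        self.record["max_rss_kb"] = usage.ru_maxrss   # per-job memory high-water mark
        return json.dumps(self.record)

if __name__ == "__main__":
    mon = JobMonitor("demo-job-001")
    with mon.algorithm("TrackFinder"):
        sum(i * i for i in range(10**6))   # stand-in for real reconstruction work
    with mon.algorithm("CaloClustering"):
        sorted(range(10**6), reverse=True)
    print(mon.finalize())
```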

  16. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ's usage of the farm and to guarantee the security and the usability of the ATLAS private network; OpenStack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  17. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ's usage of the farm and to guarantee the security and the usability of the ATLAS private network; OpenStack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  18. Evolution of the ATLAS PanDA workload management system for exascale computational science

    International Nuclear Information System (INIS)

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  19. The evolution of computer monitoring of real time data during the Atlas Centaur launch countdown

    Science.gov (United States)

    Thomas, W. F.

    1981-01-01

    In the last decade, improvements in computer technology have provided new 'tools' for controlling and monitoring critical missile systems. Accordingly, computers have gradually taken on a larger role in monitoring all flight and ground systems on the Atlas Centaur. The wide-body Centaur, which will be launched in the Space Shuttle cargo bay, will use computers to an even greater extent; it is planned to use the wide-body Centaur to boost the Galileo spacecraft toward Jupiter in 1985. The critical systems which must be monitored prior to liftoff are examined. Computers have now been programmed to monitor all critical parameters continuously. At this time, there are two separate computer systems used to monitor these parameters.

  20. A Computer-Based Atlas of a Rat Dissection.

    Science.gov (United States)

    Quentin-Baxter, Megan; Dewhurst, David

    1990-01-01

    A hypermedia computer program that uses text, graphics, sound, and animation with associative information linking techniques to teach the functional anatomy of a rat is described. The program includes a nonintimidating tutor, to which the student may turn. (KR)

  1. A step towards a computing grid for the LHC experiments: ATLAS Data Challenge 1

    Energy Technology Data Exchange (ETDEWEB)

    Sturrock, R.; Bischof, R.; Epp, B.; Ghete, V.M.; Kuhn, D.; Mello, A.G.; Caron, B.; Vetterli, M.C.; Karapetian, G.; Martens, K.; Agarwal, A.; Poffenberger, P.; McPherson, R.A.; Sobie, R.J.; Armstrong, S.; Benekos, N.; Boisvert, V.; Boonekamp, M.; Brandt, S.; Casado, P.; Elsing, M.; Gianotti, F.; Goossens, L.; Grote, M.; Jansen, J.B.; Mair, K.; Nairz, A.; Padilla, C.; Poppleton, A.; Poulard, G.; Richter-Was, E.; Rosati, S.; Schoerner-Sadenius, T.; Wengler, T.; Xu, G.F.; Ping, J.L.; Chudoba, J.; Kosina, J.; Lokajicek, M.; Svec, J.; Tas, P.; Hansen, J.R.; Lytken, E.; Nielsen, J.L.; Waananen, A.; Tapprogge, S.; Calvet, D.; Albrand, S.; Collot, J.; Fulachier, J.; Ledroit-Guillon, F.; Ohlsson-Malek, S.; Viret, S.; Wielers, M.; Bernardet, K.; Correard, S.; Rozanov, A.; de Vivie de Regie, J-B.; Arnault, C.; Bourdarios, C.; Hrivnac, J.; Lechowski, M.; Parrour, G.; Perus, A.; Rousseau, D.; Schaffer, A.; Unal, G.; Derue, F.; Chevalier, L.; Hassani, S.; Laporte, J-F.; Nicolaidou, R.; Pomarede, D.; Virchaux, M.; Nesvadba, N.; Baranov, Sergei; Putzer, A.; Khonich, A.; Duckeck, G.; Schieferdecker, P.; Kiryunin, A.; Schieck, J.; Lagouri, Th.; Duchovni, E.; Levinson, L.; Schrager, D.; Negri, G.; Bilokon, H.; Spogli, L.; Barberis, D.; Parodi, F.; Cataldi, G.; Gorini, E.; Primavera, M.; Spagnolo, S.; Cavalli, D.; Heldmann, M.; Lari, T.; Perini, L.; Rebatto, D.; Resconi, S.; Tartarelli, F.; Vaccarossa, L.; Biglietti, M.; Carlino, G.; Conventi, F.; Doria, A.; Merola, L.; Polesello, G.; Vercesi, V.; De Salvo, A.; Di Mattia, A.; Luminari, L.; Nisati, A.; Reale, M.; Testa, M.; Farilla, A.; Verducci, M.; Cobal, M.; Santi, L.; Hasegawa, Y.; Ishino, M.; Mashimo, T.; Matsumoto, H.; Sakamoto, H.; Tanaka, J.; Ueda, I.; Bentvelsen, S.; Fornaini, A.; Gorfine, G.; Groep, D.; Templon, J.; Koster, J.; Konstantinov, A.; Myklebust, T.; Ould-Saada, F.; Bold, T.; Kaczmarska, A.; Malecki, P.; Szymocha, T.; Turala, M.; Kulchitsky, Y.; Khoreauli, G.; Gromova, N.; Tsulaia, V.; et al.

    2004-04-23

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge was the preparation and the deployment of the software required for the production of large event samples as a worldwide-distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organizing and then carrying out this large-scale production at a significant number of sites around the world therefore had to be faced. However, the benefits of this are manifold: apart from realizing the required computing resources, this exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.

  2. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include commonly organized courses for students of all fields to support education in grid computing.

  3. Atlases: Complex models of geospace

    Directory of Open Access Journals (Sweden)

    Ikonović Vesna

    2005-01-01

    Full Text Available An atlas is a modeled, structured presentation of the thematic content of a treated space in an optimal union of maps. Atlases are a higher form of cartography. They are composed of maps that differ in projection, scale, format, methods, content and use. Atlases can be classified according to multiple criteria. A modern classification of atlases by production technology distinguishes: 1. classical or traditional atlases (printed on paper) and 2. electronic atlases (made on electronic media - a computer or computer workstation). Electronic atlases are divided into three large groups: 1. view-only electronic atlases, 2. interactive electronic atlases and 3. analytical electronic atlases.

  4. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    International Nuclear Information System (INIS)

    Originally the ATLAS Computing and Data Distribution model assumed that the Tier-2s should collectively keep on disk at least one copy of all “active” AOD and DPD datasets. Evolution of the ATLAS Computing and Data model requires changes in the ATLAS Tier-2 policy for data replication, dynamic data caching and remote data access. Tier-2 operations take place completely asynchronously with respect to data taking: Tier-2s do simulation and user analysis. Large-scale reprocessing jobs on real data at first take place mostly at Tier-1s but will progressively be shared with Tier-2s as well. The availability of disk space at Tier-2s is extremely important in the ATLAS Computing model, as it allows more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier-2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier-2s are going to be used more efficiently. In this way Tier-1s and Tier-2s are becoming more equivalent for the network, and the Tier-1/Tier-2 hierarchy is less strict. This paper presents the usage of Tier-2 resources in different Grid activities, the caching of data at Tier-2s, and their role in the analysis in the new ATLAS Computing and Data model.

  5. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    International Nuclear Information System (INIS)

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  6. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  7. ATLAS distributed computing operation shift teams experience during the discovery year and beginning of the long shutdown 1

    International Nuclear Information System (INIS)

    ATLAS Distributed Computing Operation Shifts evolve to meet new requirements. New monitoring tools as well as operational changes lead to modifications in organization of shifts. In this paper we describe the structure of shifts, the roles of different shifts in ATLAS computing grid operation, the influence of a Higgs-like particle discovery on shift operation, the achievements in monitoring and automation that allowed extra focus on the experiment priority tasks, and the influence of the Long Shutdown 1 and operational changes related to the no beam period.

  8. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid

    International Nuclear Information System (INIS)

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider at CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment, using simulated data and data taken during the combined test beam of 2004. The analysis of Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  9. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...

  10. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  11. SynapSense wireless environmental monitoring system of the RHIC and ATLAS computing facility at BNL

    International Nuclear Information System (INIS)

    RHIC and ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated in 1990 and grew steadily up to the present configuration with 4 physically isolated IT areas with the maximum rack capacity of about 1000 racks and the total peak power consumption of 1.5 MW. In June 2012 a project was initiated with the primary goal to replace several environmental monitoring systems deployed earlier within RACF with a single commercial hardware and software solution by SynapSense Corporation based on wireless sensor groups and proprietary SynapSense™ MapSense™ software that offers a unified solution for monitoring the temperature and humidity within the rack/CRAC units as well as pressure distribution underneath the raised floor across the entire facility. The deployment was completed successfully in 2013. The new system also supports a set of additional features such as capacity planning based on measurements of total heat load, power consumption monitoring and control, CRAC unit power consumption optimization based on feedback from the temperature measurements and overall power usage efficiency estimations that are not currently implemented within RACF but may be deployed in the future.

  12. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  13. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    CERN Document Server

    Solans, C; The ATLAS collaboration; Kim, H Y; Moreno, P; Reed, R; Sandrock, C; Ruan, X; Shalyugin, A; Schettino, V; Souza, J; Usai, G; Valero, A

    2013-01-01

    After two years of operation of the LHC, the ATLAS Tile Calorimeter is undergoing the consolidation process of its front-end electronics. The first layer of certification of the repairs is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This test-bench has been redesigned to improve the quality assessment of the data until the end of Phase I. It is now possible to identify low occurrence errors due to its increased read-out bandwidth and perform more sophisticated quality checks due to its enhanced computing power. Improved results provide fast and reliable feedback to the user.

  14. Tier-1 reprocessing and other key grid computing activities within the ATLAS-Gridka cloud

    International Nuclear Information System (INIS)

    Computing in ATLAS is organized in so-called Tier-1 clouds. The Tier-1 provides crucial services for DDM and production, which have been developed and extensively tested over the last years. A further key activity of a Tier-1 is data reprocessing, which requires bulk reading of RAW data from tape and is therefore an I/O-intensive activity; efficient tape-system I/O performance is thus very important. Tape reading tests have been carried out with the aim of optimizing the system. The talk presents the progress made and the current status in line with the expected performance, and also gives an overview of the current status and progress in the other areas.

  15. A Step Towards A Computing Grid For The LHC Experiments ATLAS Data Challenge 1

    CERN Document Server

    Sturrock, R; Epp, B; Ghete, V M; Kuhn, D; Mello, A G; Caron, B; Vetterli, M C; Karapetian, G V; Martens, K; Agarwal, A; Poffenberger, P R; McPherson, R A; Sobie, R J; Amstrong, S; Benekos, N C; Boisvert, V; Boonekamp, M; Brandt, S; Casado, M P; Elsing, M; Gianotti, F; Goossens, L; Grote, M; Hansen, J B; Mair, K; Nairz, A; Padilla, C; Poppleton, A; Poulard, G; Richter-Was, Elzbieta; Rosati, S; Schörner-Sadenius, T; Wengler, T; Xu, G F; Ping, J L; Chudoba, J; Kosina, J; Lokajícek, M; Svec, J; Tas, P; Hansen, J R; Lytken, E; Nielsen, J L; Wäänänen, A; Tapprogge, Stefan; Calvet, D; Albrand, S; Collot, J; Fulachier, J; Ledroit-Guillon, F; Ohlsson-Malek, F; Viret, S; Wielers, M; Bernardet, K; Corréard, S; Rozanov, A; De Vivie de Régie, J B; Arnault, C; Bourdarios, C; Hrivnác, J; Lechowski, M; Parrour, G; Perus, A; Rousseau, D; Schaffer, A; Unal, G; Derue, F; Chevalier, L; Hassani, S; Laporte, J F; Nicolaidou, R; Pomarède, D; Virchaux, M; Nesvadba, N; Baranov, S; Putzer, A; Khonich, A; Duckeck, G; Schieferdecker, P; Kiryunin, A E; Schieck, J; Lagouri, T; Duchovni, E; Levinson, L; Schrager, D; Negri, G; Bilokon, H; Spogli, L; Barberis, D; Parodi, F; Cataldi, G; Gorini, E; Primavera, M; Spagnolo, S; Cavalli, D; Heldmann, M; Lari, T; Perini, L; Rebatto, D; Resconi, S; Tatarelli, F; Vaccarossa, L; Biglietti, M; Carlino, G; Conventi, F; Doria, A; Merola, L; Polesello, G; Vercesi, V; De Salvo, A; Di Mattia, A; Luminari, L; Nisati, A; Reale, M; Testa, M; Farilla, A; Verducci, M; Cobal, M; Santi, L; Hasegawa, Y; Ishino, M; Mashimo, T; Matsumoto, H; Sakamoto, H; Tanaka, J; Ueda, I; Bentvelsen, Stanislaus Cornelius Maria; Fornaini, A; Gorfine, G; Groep, D; Templon, J; Köster, L J; Konstantinov, A; Myklebust, T; Ould-Saada, F; Bold, T; Kaczmarska, A; Malecki, P; Szymocha, T; Turala, M; Kulchitskii, Yu A; Khoreauli, G; Gromova, N; Tsulaia, V; Minaenko, A A; Rudenko, R; Slabospitskaya, E; Solodkov, A; Gavrilenko, I; Nikitine, N; Sivoklokov, S Yu; Toms, K; Zalite, A; Zalite, Yu; Kervesan, B; Bosman, M; González, S; Sánchez, J; Salt, J; Andersson, N; Nixon, L; Eerola, Paule Anna Mari; Kónya, B; Smirnova, O G; Sandgren, A; Ekelöf, T J C; Ellert, M; Gollub, N; Hellman, S; Lipniacka, A; Corso-Radu, A; Pérez-Réale, V; Lee, S C; CLin, S C; Ren, Z L; Teng, P K; Faulkner, P J W; O'Neale, S W; Watson, A; Brochu, F; Lester, C; Thompson, S; Kennedy, J; Bouhova-Thacker, E; Henderson, R; Jones, R; Kartvelishvili, V G; Smizanska, M; Washbrook, A J; Drohan, J; Konstantinidis, N P; Moyse, E; Salih, S; Loken, J; Baines, J T M; Candlin, D; Candlin, R; Clifft, R; Li, W; McCubbin, N A; George, S; Lowe, A; Buttar, C; Dawson, I; Moraes, A; Tovey, Daniel R; Gieraltowski, J; Malon, D; May, E; LeCompte, T J; Vaniachine, A; Adams, D L; Assamagan, Ketevi A; Baker, R; Deng, W; Fine, V; Fisyak, Yu; Gibbard, B; Ma, H; Nevski, P; Paige, F; Rajagopalan, S; Smith, J; Undrus, A; Wenaus, T; Yu, D; Calafiura, P; Canon, S; Costanzo, D; Hinchliffe, Ian; Lavrijsen, W; Leggett, C; Marino, M; Quarrie, D R; Sakrejda, I; Stravopoulos, G; Tull, C; Loch, P; Youssef, S; Shank, J T; Engh, D; Frank, E; Sen-Gupta, A; Gardner, R; Meritt, F; Smirnov, Y; Huth, J; Grundhoefer, L; Luehring, F C; Goldfarb, S; Severini, H; Skubic, P L; Gao, Y; Ryan, T; De, K; Sosebee, M; McGuigan, P; Ozturk, N

    2004-01-01

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that it was not an option to "run the complete production at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world had therefore to be faced. However, the benefits of this are manifold: apart from realising the require...

  16. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. The new cloud technologies also come with new challenges, one of which is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), uploading users' own virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  17. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    CERN Document Server

    Öhman, H; The ATLAS collaboration; Hendrix, V

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. With the new cloud technologies also come new challenges, one of which is the contextualization of cloud resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), uploading users' own virtual machine images is not possible, which precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration, dynamic resource scaling, and a high degree of scalability.
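    Both GCE records above note that, without custom images, instances must be contextualized at boot, which that work addressed with Puppet. A common building block on GCE, independent of Puppet, is the instance metadata server, from which a boot-time agent can pull per-instance configuration. The sketch below shows that pattern using the standard metadata endpoint; the 'panda-queue' attribute name and what is done with the value are hypothetical.

```python
"""Fetch per-instance configuration from the GCE metadata server.

The metadata endpoint and the 'Metadata-Flavor: Google' header are standard
on Google Compute Engine; the 'panda-queue' attribute name is made up for
this example. A boot-time agent (or a Puppet fact) could read such values
to decide how to configure the node.
"""
import urllib.request

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/"

def get_metadata(path: str) -> str:
    req = urllib.request.Request(
        METADATA_URL + path,
        headers={"Metadata-Flavor": "Google"},   # header required by the service
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Instance-level custom attribute set at VM creation time (hypothetical name).
    queue = get_metadata("instance/attributes/panda-queue")
    hostname = get_metadata("instance/hostname")
    print(f"Configuring {hostname} for queue {queue}")
```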

  18. PanDA: A New Paradigm for Distributed Computing in HEP Through the Lens of ATLAS and other Experiments

    CERN Document Server

    De, K; The ATLAS collaboration; Maeno, T; Nilsson, P; Wenaus, T

    2014-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires more than a billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of computing in HEP was discarded in favor of a far more flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at a million computing jobs per day, and processing over an exabyte of data in 2013. We will describe the design and implementation of PanDA, present data on the performance of PanDA a...

  19. Analysis of Metabolomics Datasets with High-Performance Computing and Metabolite Atlases

    Directory of Open Access Journals (Sweden)

    Yushu Yao

    2015-07-01

    Full Text Available Even with the widespread use of liquid chromatography mass spectrometry (LC/MS) based metabolomics, there are still a number of challenges facing this promising technique. Many diverse experimental workflows exist, yet there is a lack of infrastructure and systems for tracking and sharing information. Here, we describe the Metabolite Atlas framework and interface, which provides highly efficient, web-based access to raw mass spectrometry data in concert with assertions about the chemicals detected, to help address some of these challenges. This integration, by design, enables experimentalists to explore their raw data and to specify and refine feature annotations so that they can be leveraged for future experiments. Fast queries of the data through the web using SciDB, a parallelized database for high-performance computing, make this process operate quickly. By using scripting containers, such as IPython or Jupyter, to analyze the data, scientists can utilize a wide variety of freely available graphing, statistics, and information management resources. In addition, the interfaces facilitate integration with systems biology tools to ultimately link metabolomics data with biological models.

  20. Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Darvann, Tron Andre; Hermann, Nuno V.; Oubel, Estanislao; Ersbøll, Bjarne Kjær; Frangi, Alejandro F.; Larsen, Per; Perlyn, Chad A.; Morriss-Kay, Gillian M.; Kreiborg, Sven

    2007-01-01

    Micro CT scannings of the skulls of wild-type mice and Crouzon mice were analysed with respect to the dysmorphology caused by Crouzon syndrome. A computational craniofacial atlas was built automatically from the set of wild-type mouse Micro CT volumes using (i) affine and (ii) nonrigid image registration. Subsequently, the atlas was deformed to match each subject from the two groups of mice. The accuracy of these registrations was measured by a comparison of manually placed landmarks from two different observers and automatically assessed landmarks. Both of the automatic approaches were within the inter-observer accuracy for normal specimens, and the nonrigid approach was within the inter-observer accuracy for the Crouzon specimens. Four linear measurements, skull length, height and width and inter-orbital distance, were carried out automatically using the two different approaches. Both automatic approaches...
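    The record above builds the atlas by (i) affine and (ii) nonrigid registration of micro-CT volumes and then deforms the atlas onto each subject. As a rough illustration of the affine step only, here is an intensity-based registration with SimpleITK; the file names and optimizer settings are placeholders, not the parameters of the cited study.

```python
"""Affine registration of one micro-CT volume to an atlas (illustrative).

Uses SimpleITK's generic registration framework; file names and optimizer
settings are placeholders, not the pipeline of the cited study. A nonrigid
refinement (e.g. a B-spline transform) would follow the same structure
with a different transform type.
"""
import SimpleITK as sitk

def register_affine(atlas_path: str, subject_path: str) -> sitk.Image:
    fixed = sitk.ReadImage(atlas_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(subject_path, sitk.sitkFloat32)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)

    transform = reg.Execute(fixed, moving)
    # Resample the subject into the atlas space with the recovered transform.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                         moving.GetPixelID())

if __name__ == "__main__":
    warped = register_affine("wildtype_atlas.nii.gz", "subject_microct.nii.gz")
    sitk.WriteImage(warped, "subject_in_atlas_space.nii.gz")
```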

  1. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  2. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at the Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between the data-taking periods. Because of the specifics of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  3. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  4. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2014-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  5. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2013-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  6. Computational neuroanatomy: mapping cell-type densities in the mouse brain, simulations from the Allen Brain Atlas

    Science.gov (United States)

    Grange, Pascal

    2015-09-01

    The Allen Brain Atlas of the adult mouse (ABA) consists of digitized expression profiles of thousands of genes in the mouse brain, co-registered to a common three-dimensional template (the Allen Reference Atlas). This brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy. Its voxelized version (with cubic voxels of side 200 microns) is available for desktop computation in MATLAB. On the other hand, brain cells exhibit a great phenotypic diversity (in terms of size, shape and electrophysiological activity), which has inspired the names of some well-studied cell types, such as granule cells and medium spiny neurons. However, no exhaustive taxonomy of brain cells is available. A genetic classification of brain cells is being undertaken, and some cell types have been characterized by their transcriptome profiles. However, given a cell type characterized by its transcriptome, it is not clear where else in the brain similar cells can be found. The ABA can be used to solve this region-specificity problem in a data-driven way: rewriting the brain-wide expression profiles of all genes in the atlas as a sum of cell-type-specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain. However, the estimated brain-wide densities of 64 cell types published recently were based on one series of co-registered coronal in situ hybridization (ISH) images per gene, whereas the online ABA contains several image series per gene, including sagittal ones. In the presented work, we simulate the variability of cell-type densities in a Monte Carlo way by repeatedly drawing a random image series for each gene and solving the optimization problem. This yields error bars on the region-specificity of cell types.
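    The per-voxel quadratic optimization described above amounts to a non-negative least-squares fit: each voxel's expression profile is rewritten as a non-negative combination of cell-type-specific transcriptome profiles, and the Monte Carlo step repeats the fit after redrawing one random image series per gene. A minimal sketch of one such fit is below; the matrix names and shapes are illustrative assumptions, not the published code.

```python
"""Per-voxel decomposition of ABA expression into cell-type densities.

Sketch only: E is a (genes x voxels) expression matrix from the voxelized
atlas, C is a (genes x cell_types) matrix of cell-type transcriptome
profiles, and each voxel is solved independently as a non-negative
least-squares problem. Array names and shapes are illustrative assumptions.
"""
import numpy as np
from scipy.optimize import nnls

def estimate_densities(E: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Return a (cell_types x voxels) array of fitted, non-negative weights."""
    n_types, n_voxels = C.shape[1], E.shape[1]
    rho = np.zeros((n_types, n_voxels))
    for v in range(n_voxels):
        rho[:, v], _residual = nnls(C, E[:, v])
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = np.abs(rng.normal(size=(500, 4)))          # 500 genes, 4 toy cell types
    true_rho = np.abs(rng.normal(size=(4, 10)))    # 10 toy voxels
    E = C @ true_rho + 0.01 * rng.normal(size=(500, 10))
    est = estimate_densities(E, C)
    print(np.abs(est - true_rho).max())            # recovery error on toy data

    # The Monte Carlo error bars mentioned in the record above come from
    # repeating this fit after drawing one random image series per gene
    # to rebuild E.
```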

  7. Computed tomography of the retroperitoneum: an anatomical and pathological atlas with emphasis on the fascial planes

    International Nuclear Information System (INIS)

    The aim of this thesis is to provide a descriptive clinical pathological CT atlas of a range of conditions involving retroperitoneum and neighbouring organs and structures (excluding the pelvic part of the retroperitoneum). Chapter 1 describes the patient material studied, some aspects of CT techniques and patient handling. Chapter 2 describes the anatomy of the renal fascia based upon reports derived from the literature and is followed by our CT observations in more than 5000 abdominal CT examinations. In short it is an anatomical CT atlas. Chapters 3, 4 and 5 deal with reactions of the fascial structures in different pathological conditions caused by major disease entities. The patients were scanned for these diseases, of which anatomical topographical appearances and spread are described in the general considerations, followed by CT findings and illustrative cases, combined with abstracted experience from other workers. (Auth.)

  8. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  9. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    International Nuclear Information System (INIS)

    After two years of operation of the LHC, the ATLAS Tile Calorimeter is undergoing a consolidation process of its front-end electronics. The certification is performed in the experimental area with a portable test-bench which is capable of controlling and reading out one front-end module through dedicated cables. This test-bench has been redesigned to improve the tests of the electronics functionality and the quality assessment of the data until the end of Phase I.

  10. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    CERN Document Server

    Solans, C; The ATLAS collaboration; Kim, H Y; Moreno, P; Reed, R; Sandrock, C; Ruan, X; Shalyugin, A; Schettino, V; Souza, J; Usai, G; Valero, A

    2014-01-01

    After two years of operation of the LHC, the ATLAS Tile calorimeter is undergoing the consolidation process of its front-end electronics. The certification is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This test-bench has been redesigned to improve the quality assessment of the data until the end of Phase I.

  11. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Augustinack, Jean C.; Nguyen, Khoa;

    2015-01-01

    Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise ... algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available ...

  12. Pocket atlas of sectional anatomy: computed tomography and magnetic resonance imaging. Vol. 3. Spine, extremities, joints

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B.; Reif, E. [Caritas Hospital, Dillingen (Germany). Dept. of Radiology

    2007-07-01

    Magnetic resonance imaging (MRI) of the musculoskeletal system is an established and important component in the diagnosis of diseases of the joints, soft tissues, bones, and bone marrow. We are therefore pleased to collect together images of the joints and the spinal column in a separate volume on the musculoskeletal system. Demonstrating the growing importance of new developments in MRI in recent years, with ever-increasing resolution, many images were acquired with 3-tesla units. We are deeply grateful to the manufacturers, Siemens and Philips, for making this possible. We believe that colored atlases are the ideal medium to represent the highly detailed images achieved nowadays with improved resolution techniques. Volume 3 of the Pocket Atlas of Sectional Anatomy provides a color illustration facing each magnetic resonance image, as in the preceding volumes on the skull, thorax, and abdomen. To ensure the greatest possible precision in details, we still produce these illustrations ourselves. Each is accompanied by a sectional image and an orientation aid. Uniform color schemes ensure optimal clarity, as similar structures, such as arteries, veins, nerves, tendons, etc., are consistently represented in the same color. Individual muscle groups are represented uniformly, but differentiated from other muscle groups, so that classification is possible even when numerous groups of muscles are shown in the same image. Maximal lucidity prevails even in highly detailed representations. This is made possible by the high quality of the production and printing processes that are characteristic of Thieme International. (orig.)

  13. Anatomic atlas for computed tomography in the mesaticephalic dog: caudal abdomen and pelvis

    International Nuclear Information System (INIS)

    The purpose of this study was to produce a comprehensive anatomic atlas of CT anatomy of the dog for use by veterinary radiologists, clinicians, and surgeons. Whole-body CT images of two mature beagle dogs were made with the dogs supported in sternal recumbency and using a slice thickness of 13 mm. At the end of the CT session, each dog was euthanized, and while carefully maintaining the same position, the body was frozen. The body was then sectioned at 13-mm intervals, with the cuts matched as closely as possible to the CT slices. The frozen sections were cleaned, photographed, and radiographed using xeroradiography. Each CT image was studied and compared with its corresponding xeroradiograph and anatomic section to assist in the accurate identification of specific structures. Clinically relevant anatomic structures were identified and labeled in the three corresponding photographs (CT image, xeroradiograph, and anatomic section). In previous papers, the head and neck, and the thorax and cranial abdomen of the mesaticephalic (beagle) dog were presented. In this paper, the caudal part of the abdomen and pelvis of the bitch and male dog are presented

  14. Anatomic atlas for computed tomography in the mesaticephalic dog: head and neck

    International Nuclear Information System (INIS)

    The purpose of this study was to produce a comprehensive anatomic atlas of CT anatomy of the dog for use by veterinary radiologists, clinicians, and surgeons. Whole-body CT images of two mature beagle dogs were made with the dogs supported in sternal recumbency and using a slice thickness of 13 mm. The head was scanned using high-resolution imaging with a slice thickness of 8 mm. At the end of the CT session, each dog was euthanized, and while carefully maintaining the same position, the body was placed in a walk-in freezer until completely frozen. The body was then sectioned at 13-mm (head at 8-mm) intervals, with the cuts matched as closely as possible to the CT slices. The frozen sections were cleaned, photographed, and radiographed using xeroradiography. Each CT image was studied and compared with its corresponding xeroradiograph and anatomic section to assist in the accurate identification of specific structures. Intact, sagittally sectioned, and disarticulated dog skulls were used as reference models. Clinically relevant anatomic structures were identified and labeled in the three corresponding photographs (CT image, xeroradiograph, and anatomic section). In this paper, the CT anatomy of the head and neck of the mesaticephalic dog is presented

  15. A novel computed method to reconstruct the bilateral digital interarticular channel of atlas and its use on the anterior upper cervical screw fixation

    Science.gov (United States)

    Wu, Ai-Min; Wang, Wenhai; Xu, Hui; Lin, Zhong-Ke; Yang, Xin-Dong; Wang, Xiang-Yang; Xu, Hua-Zi

    2016-01-01

    Purpose. To investigate a novel computed method to reconstruct the bilateral digital interarticular channel of the atlas and its potential use in anterior upper cervical screw fixation. Methods. We used reverse engineering software (image-processing software and computer-aided design software) to create the approximate and optimal digital interarticular channels of the atlas for 60 participants. Angles of the channels, diameters of inscribed circles, and long and short axes of ellipses were measured and recorded, and a gender-specific analysis was also performed. Results. The channels provided sufficient space for one or two screws, and the parameters of the channels are described. While the channels of females were smaller than those of males, no significant difference in angles between males and females was observed. Conclusion. Our study demonstrates the radiological features of the approximate and optimal digital interarticular channels of the atlas, and provides the reference trajectory of anterior transarticular screws and anterior occiput-to-axis screws. Additionally, we provide a protocol that can help make a pre-operative plan for accurate placement of anterior transarticular screws and anterior occiput-to-axis screws. PMID:26925345

  16. 26th February 2009 - US Google Vice President and Chief Internet Evangelist V. Cerf signing the guest book with Director for research and Computing S. Bertolucci; visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    HI-0902038 05: IT Department Head, F. Hemmer; US Google Vice President and Chief Internet Evangelist V. Cerf; Computing Security Officer and Colloquium Convenor D. R. Myers; Member of the Internet Society Advisory Council F. Flückiger; Director for Research and Scientific Computing, S. Bertolucci ; Honorary Staff Member, B. Segal. HI-0902038 16: Computing Security Officer and Colloquium Convenor D. R. Myers; UC Irvine, ATLAS Deputy Spokesperson elect A. J. Lankford; US Google Vice President and Chief Internet Evangelist V. Cerf; ATLAS Collaboration Spokesperson P. Jenni; IT Department Head, F. Hemmer.

  17. The evolution of the trigger and data acquisition system in the ATLAS experiment (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    International Nuclear Information System (INIS)

    The ATLAS experiment, which records the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of this upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. While the TDAQ system successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. With higher luminosities, the required number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS, while keeping the total Level-1 rates at or below 100 kHz. The Central Trigger Processor will be upgraded to increase the number of manageable inputs and accommodate additional hardware for improved performance, and a new Topological Processor will be included. A single homogeneous high level trigger system will be deployed. The current second and third trigger levels will be executed together on a single hardware node. This design has many advantages: the radical simplification of the architecture, the flexible and automatically balanced distribution of the computing resources, and the sharing of code and services on nodes. In this paper, we report on the design and the development status of the upgraded TDAQ system, with particular attention to the tests currently ongoing to identify the required performance and to spot its possible limitations.

  18. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned through its collaboration with leading commercial and academic cloud providers.
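
    As a rough illustration of the elasticity described above, the sketch below shows the kind of decision loop a provisioning component might use: start extra cloud workers when the backlog of queued jobs grows, and release them when it drains. All names and thresholds are hypothetical; the real ATLAS integration goes through the PanDA workload management system and actual cloud provider APIs, which are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class QueueState:
    queued_jobs: int      # jobs waiting for a slot
    running_workers: int  # cloud workers currently provisioned

def workers_to_adjust(state: QueueState,
                      jobs_per_worker: int = 10,
                      max_workers: int = 100) -> int:
    """Return how many cloud workers to add (positive) or remove (negative).

    A purely illustrative heuristic: keep roughly one worker per
    `jobs_per_worker` queued jobs, capped at `max_workers`.
    """
    wanted = min(max_workers, -(-state.queued_jobs // jobs_per_worker))  # ceiling division
    return wanted - state.running_workers

# Example: 250 queued jobs with 10 workers running -> request 15 more workers.
print(workers_to_adjust(QueueState(queued_jobs=250, running_workers=10)))
```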

  19. Atlas Tier 3

    CERN Document Server

    Benjamin, D; The ATLAS collaboration

    2010-01-01

    ATLAS has built a powerful system for computing activities on top of three major grid infrastructures. As expected, with data finally arriving, physicists need dedicated resources for analysis activities. In contrast to the existing grid infrastructure, there is a strong need to provide users with data control and high-performance (quasi) interactive data access. The ATLAS Tier3 solution is targeted to provide efficient and manageable analysis computing at each member institution. At most sites, only a small fraction of a physicist's or student's time can be diverted to computing support. Transformative technologies have been chosen and integrated with the existing ATLAS tools. The result is a site which is substantially simpler to maintain and which is operated essentially by client tools and extensive use of caching technologies. The most promising new technologies in use are xroot and Lustre (distributed storage) and CVMFS (experiment software distribution and condition files). We believe that this experience ha...

  20. ATLAS PhD Grants 2015

    CERN Multimedia

    Marcelloni De Oliveira, Claudia

    2015-01-01

    ATLAS PhD Grants - We are excited to announce the creation of a dedicated grant scheme (thanks to a donation from Fabiola Gianotti and Peter Jenni following their award from the Fundamental Physics Prize foundation) to encourage young and high-caliber doctoral students in particle physics research (including computing for physics) and permit them to obtain world-class exposure, supervision and training within the ATLAS collaboration. This special PhD Grant is aimed at graduate students preparing a doctoral thesis in particle physics (incl. computing for physics) who will spend one year at CERN followed by one year of support at their home institute.

  1. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid; Reconstruction et identification des electrons dans l'experience Atlas. Participation a la mise en place d'un Tier 2 de la grille de calcul

    Energy Technology Data Exchange (ETDEWEB)

    Derue, F

    2008-03-15

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment with simulated data and data taken during the combined test beam of 2004. The analysis of Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  2. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring, optimised for presenting huge amounts of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
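
    As a hedged sketch of what "dynamic job definition based on many criteria" can look like, the snippet below greedily groups input files into jobs under an input-size cap and attaches rough memory and CPU estimates. The caps, scaling factors and class names are illustrative assumptions, not the ProdSys2 schema or logic.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InputFile:
    name: str
    size_gb: float
    events: int

@dataclass
class JobSpec:
    inputs: List[InputFile]
    est_memory_mb: int
    est_cpu_hours: float

def _make_job(batch, mem_per_event_kb, cpu_sec_per_event) -> JobSpec:
    events = sum(f.events for f in batch)
    return JobSpec(inputs=list(batch),
                   est_memory_mb=int(2000 + events * mem_per_event_kb / 1024),
                   est_cpu_hours=events * cpu_sec_per_event / 3600.0)

def define_jobs(files: List[InputFile],
                max_input_gb: float = 10.0,
                mem_per_event_kb: float = 0.5,
                cpu_sec_per_event: float = 2.0) -> List[JobSpec]:
    """Greedily split input files into jobs whose total input size stays under
    max_input_gb, attaching rough per-job resource estimates."""
    jobs, batch, batch_gb = [], [], 0.0
    for f in files:
        if batch and batch_gb + f.size_gb > max_input_gb:
            jobs.append(_make_job(batch, mem_per_event_kb, cpu_sec_per_event))
            batch, batch_gb = [], 0.0
        batch.append(f)
        batch_gb += f.size_gb
    if batch:
        jobs.append(_make_job(batch, mem_per_event_kb, cpu_sec_per_event))
    return jobs

# Example: seven 3.2 GB files are split into jobs of 3, 3 and 1 files.
files = [InputFile(f"data.{i:04d}.root", size_gb=3.2, events=5000) for i in range(7)]
for job in define_jobs(files):
    print(len(job.inputs), job.est_memory_mb, round(job.est_cpu_hours, 2))
```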

  3. The ATLAS Distributed Data Management project: Past and Future

    CERN Document Server

    Garonne, V; The ATLAS collaboration

    2012-01-01

    ATLAS has recorded almost 8PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All this data is managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to this data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of petabyte scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.
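
    A minimal sketch of the bookkeeping such a data management system performs: mapping datasets to files and files to the sites holding replicas, so that a dataset request can be resolved to concrete locations. The structure and names are deliberately simplified assumptions, not the DQ2 or Rucio data model.

```python
from collections import defaultdict

class ReplicaCatalogue:
    """Toy catalogue: dataset -> files, file -> sites holding a replica."""

    def __init__(self):
        self.dataset_files = defaultdict(set)
        self.file_sites = defaultdict(set)

    def add_file(self, dataset: str, filename: str, site: str) -> None:
        self.dataset_files[dataset].add(filename)
        self.file_sites[filename].add(site)

    def sites_with_full_copy(self, dataset: str) -> set:
        """Sites that hold a replica of every file in the dataset."""
        files = self.dataset_files[dataset]
        if not files:
            return set()
        return set.intersection(*(self.file_sites[f] for f in files))

cat = ReplicaCatalogue()
cat.add_file("data12.RAW", "file1", "SITE_A")
cat.add_file("data12.RAW", "file1", "SITE_B")
cat.add_file("data12.RAW", "file2", "SITE_B")
print(cat.sites_with_full_copy("data12.RAW"))  # {'SITE_B'}
```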

  4. Distributed analysis in ATLAS

    Science.gov (United States)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
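
    The "continuous automatic validation of the distributed analysis sites" mentioned above boils down to running frequent test jobs and excluding sites whose recent success rate falls below a threshold. The sketch below shows that logic in miniature; the window, threshold and class names are illustrative assumptions rather than the actual validation policy.

```python
from collections import deque

class SiteValidator:
    """Track recent test-job results per site and flag sites for exclusion."""

    def __init__(self, window: int = 20, min_efficiency: float = 0.8):
        self.window = window
        self.min_efficiency = min_efficiency
        self.results = {}  # site -> deque of booleans (True = test job succeeded)

    def record(self, site: str, success: bool) -> None:
        self.results.setdefault(site, deque(maxlen=self.window)).append(success)

    def excluded_sites(self):
        out = []
        for site, res in self.results.items():
            if len(res) == self.window and sum(res) / len(res) < self.min_efficiency:
                out.append(site)
        return out

v = SiteValidator(window=5, min_efficiency=0.8)
for ok in [True, False, False, True, False]:
    v.record("SITE_X", ok)
for ok in [True] * 5:
    v.record("SITE_Y", ok)
print(v.excluded_sites())  # ['SITE_X']
```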

  5. Distributed analysis in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We r...

  6. ATLAS DQ2 DELETION SERVICE

    CERN Document Server

    Oleynik, D; The ATLAS collaboration; Garonne, V; Campana, S

    2012-01-01

    The ATLAS DQ2 Deletion service is a subsystem of the ATLAS Distributed Data Management (DDM) project DQ2. DDM DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 130 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The ATLAS DQ2 Deletion service is responsible for serving deletion requests on the grid by interacting with the grid middleware and the DQ2 catalogues. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this talk, special attention is paid to the technical details used to achieve the high performance of the service without overloading site storage, the catalogues or other DQ2 components. The specifics of the database backend implementation will also be described. A special section will be devoted to the deletion monitoring service, which gives operators a detailed view of the working system.
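
    A minimal sketch of the retry and load-management behaviour described above: deletion requests are processed in passes, failures are re-queued up to a retry limit, and a per-pass cap on deletions per storage endpoint avoids hammering any single site. The function names, limits and the `delete_callback` stand-in are assumptions, not the DQ2 implementation.

```python
from collections import defaultdict, deque

def run_deletions(requests, delete_callback, max_retries=3, max_per_site=100):
    """Process (site, filename) deletion requests with retries and a per-pass
    cap per site; delete_callback(site, filename) returns True on success and
    stands in for the real grid-middleware call."""
    queue = deque((site, name, 0) for site, name in requests)
    failed = []
    while queue:
        handled_per_site = defaultdict(int)
        deferred = deque()
        while queue:
            site, name, attempts = queue.popleft()
            if handled_per_site[site] >= max_per_site:   # site quota used up this pass
                deferred.append((site, name, attempts))
                continue
            handled_per_site[site] += 1
            if not delete_callback(site, name):
                if attempts + 1 < max_retries:
                    deferred.append((site, name, attempts + 1))  # retry in a later pass
                else:
                    failed.append((site, name))                  # give up, report
        queue = deferred
    return failed

# Example with a fake backend that fails the first attempt on every file.
seen = set()
def flaky_delete(site, name):
    ok = (site, name) in seen
    seen.add((site, name))
    return ok

print(run_deletions([("SITE_A", f"file{i}") for i in range(3)], flaky_delete))  # -> []
```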

  7. Methods and computing challenges of the realistic simulation of physics events in the presence of pile-up in the ATLAS experiment

    CERN Document Server

    Chapman, J D; The ATLAS collaboration

    2014-01-01

    We are now in a regime where we observe substantial multiple proton-proton collisions within each filled LHC bunch-crossing and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase with increased luminosity in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a description of the standard approach used by the ATLAS experiment and details of how we manage the conflicting demands of keeping the background dataset size as small as possible while minimizing the effect of background event re-use. We also present details of the methods used to minimize the memory footprint of these digitization jobs, to keep them within the grid limit, despite combining the information from thousands of simulated events at once. We also describe an alternative approach, known as Overlay. Here, the actual detector conditions are sampled from raw data using a special zero-bias trigger, and the simulated physi...
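
    To make the re-use trade-off concrete, here is a toy sampler (an assumption-laden illustration, not the ATLAS digitization code): for each signal event it draws a Poisson-distributed number of pile-up events from a fixed-size background cache and records how often each cached event is re-used, which is the quantity one wants to keep low while also keeping the cache small.

```python
import math
import random
from collections import Counter

def poisson(mu, rng):
    """Draw from a Poisson distribution (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mix_pileup(n_signal_events, mu, cache_size, seed=42):
    """For each signal event, overlay Poisson(mu) background events sampled
    with replacement from a cache of cache_size events; return re-use counts."""
    rng = random.Random(seed)
    usage = Counter()
    for _ in range(n_signal_events):
        for _ in range(poisson(mu, rng)):
            usage[rng.randrange(cache_size)] += 1
    return usage

# Illustrative numbers only: 1000 signal events, an average of 20 pile-up
# interactions per crossing, and a cache of 5000 background events.
usage = mix_pileup(n_signal_events=1000, mu=20, cache_size=5000)
print("most re-used background event was used", max(usage.values()), "times")
```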

  8. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Adorisio, Cristina; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahmed, Hossain; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov , Andrei; Aktas, Adil; Alam, Mohammad; Alam, Muhammad Aftab; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amelung, Christoph; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Andeen, Timothy; Anders, Christoph Falk; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonelli, Stefano; Antos, Jaroslav; Antunovic, Bijana; Anulli, Fabio; Aoun, Sahar; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Argyropoulos, Theodoros; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Arutinov, David; Asai, Makoto; Asai, Shoji; Silva, José; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asner, David; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Badescu, Elisabeta; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Mark; Baker, Oliver Keith; Baker, Sarah; Baltasar Dos Santos Pedrosa, Fernando; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Baranov, Sergey; Baranov, Sergei; Barashkou, Andrei; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Bartsch, Detlef; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Bazalova, Magdalena; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Becerici, Neslihan; Bechtle, Philip; Beck, Graham; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Ayda; Beddall, Andrew; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Bendel, Markus; Benedict, Brian Hugues; Benekos, Nektarios; Benhammou, Yan; Benincasa, Gianpaolo; Benjamin, Douglas; Benoit, Mathieu; Bensinger, 
James; Benslama, Kamal; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blocker, Craig; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bocci, Andrea; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boulahouache, Chaouki; Bourdarios, Claire; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodet, Eyal; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bucci, Francesca; Buchanan, James; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, Françcois; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Byatt, Tom; Caballero, Jose; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Camarri, Paolo; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D.; Carron Montero, Sebastian; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Cheatham, Susan; Chekanov, Sergei; 
Chekulaev, Sergey; Chelkov, Gueorgui; Chen, Hucheng; Chen, Shenjian; Chen, Xin; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Tcherniatine, Valeri; Chesneanu, Daniela; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chevallier, Florent; Chiarella, Vitaliano; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Citterio, Mauro; Clark, Allan G.; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H.; Coggeshall, James; Cogneras, Eric; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Consonni, Michele; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Cranshaw, Jack; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dallison, Steve; Daly, Colin; Dam, Mogens; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Merlin; Davison, Adam; Dawson, Ian; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De Mora, Lee; De Oliveira Branco, Miguel; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; De Zorzi, Guido; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Deng, Wensheng; Denisov, Sergey; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djilkibaev, Rashid; Djobava, Tamar; do Vale, Maria 
Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Doglioni, Caterina; Doherty, Tom; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dotti, Andrea; Dova, Maria-Teresa; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Dris, Manolis; Dubbert, Jörg; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen , Michael; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Dushkin, Andrei; Duxfield, Robert; Dwuznik, Michal; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Egorov, Kirill; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ermoline, Iouri; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Facius, Katrine; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Fayard, Louis; Fayette, Florent; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Feligioni, Lorenzo; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernandes, Bruno; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fonseca Martin, Teresa; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; Freestone, Julian; French, Sky; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, K K; Gao, Yongsheng; Gaponenko, Andrei; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gautard, Valerie; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Gentile, 
Simonetta; Georgatos, Fotios; George, Simon; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Girtler, Peter; Giugni, Danilo; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goggi, Virginio; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçcalo, Ricardo; Gonella, Laura; Gong, Chenwei; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Green, Barry; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregor, Ingrid-Maria; Grenier, Philippe; Griesmayer, Erich; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Grishkevich, Yaroslav; Groh, Manfred; Groll, Marius; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Gupta, Ambreesh; Gusakov, Yury; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Härtel, Roland; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, John Renner; Hansen, Peter Henrik; Hansl-Kozanecka, Traudl; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hashemi, Kevan; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Takashi; Hayward, Helen; Haywood, Stephen; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Helsens, Clement; Hemperek, Tomasz; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Henß, Tobias; Hernández Jiménez, Yesenia; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; 
Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Horazdovsky, Tomas; Hori, Takuya; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howe, Travis; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Hughes, Emlyn; Hughes, Gareth; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Irles Quiles, Adrian; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Isobe, Tadaaki; Issakov, Vladimir; Issever, Cigdem; Istin, Serhat; Itoh, Yuki; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jared, Richard; Jarlskog, Göran; Jeanty, Laura; Jen-La Plante, Imai; Jenni, Peter; Jež, Pavel; Jézéquel, Stéphane; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinnouchi, Osamu; Joffe, David; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jorge, Pedro; Joseph, John; Juranek, Vojtech; Jussel, Patrick; Kabachenko, Vasily; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kalinowski, Artur; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagounis, Michael; Karagoz, Muge; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kastoryano, Michael; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kayumov, Fred; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Keener, Paul; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khomich, Andrei; Khoriauli, Gia; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kind, Oliver; Kind, Peter; King, Barry; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kisielewska, Danuta; Kittelmann, Thomas; Kiyamura, Hironori; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Klute, Markus; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Koblitz, Birger; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; 
Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kolos, Serguei; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Kondo, Takahiko; Kono, Takanori; Konoplich, Rostislav; Konovalov, Serguei; Konstantinidis, Nikolaos; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostka, Peter; Kostyukhin, Vadim; Kotov, Serguei; Kotov, Vladislav; Kotov, Konstantin; Kourkoumelis, Christine; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Henri; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumshteyn, Zinovii; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurchaninov, Leonid; Kurochkin, Yurii; Kus, Vlastimil; Kwee, Regina; La Rotonda, Laura; Labbe, Julien; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; Le Vine, Micheal; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lefebvre, Michel; Legendre, Marie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leitner, Rupert; Lellouch, Daniel; Lellouch, Jeremie; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leroy, Claude; Lessard, Jean-Raphael; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Leyton, Michael; Li, Haifeng; Li, Shumin; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lilley, Joseph; Lim, Heuijin; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Tiankuan; Liu, Yanwen; Livan, Michele; Lleres, Annick; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Lovas, Lubomir; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, 
Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Luehring, Frederick; Luisa, Luca; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahmood, A.; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makouski, Mikhail; Makovec, Nikola; Malecki, Piotr; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mambelli, Marco; Mameghani, Raphael; Mamuzic, Judita; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mapelli, Alessandro; Mapelli, Livio; March , Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti-Garcia, Salvador; Martin, Alex; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martini, Agnese; Martyniuk, Alex; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastroberardino, Anna; Masubuchi, Tatsuya; Matricon, Pierre; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maxfield, Stephen; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mc Donald, Jeffrey; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCubbin, Norman; McFarlane, Kenneth; McGlone, Helen; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Menke, Sven; Meoni, Evelin; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W. 
Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Mills, Bill; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Misawa, Shigeki; Miscetti, Stefano; Misiejuk, Andrzej; Mitrevski, Jovan; Mitsou, Vasiliki A.; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Mladenov, Dimitar; Moa, Torbjoern; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murillo Garcia, Raul; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakamura, Koji; Nakano, Itsuo; Nakatsuka, Hiroki; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nderitu, Simon Kirichu; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newcomer, Mitchel; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicoletti, Giovanni; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Nikiforov, Andriy; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Notz, Dieter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver, John; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Ottersbach, John; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Oyarzun, Alejandro; Ozcan, Veysi Erkcan; Ozone, Kenji; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pajchel, Katarina; Palestini, 
Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadopoulou, Theodora; Park, Su-Jung; Park, Woochun; Parker, Andy; Parker, Sherwood; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor , Gabriella; Pataraia, Sophio; Pater, Joleen; Patricelli, Sergio; Patwa, Abid; Pauly, Thilo; Peak, Lawrence; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Piacquadio, Giacinto; Piccinini, Maurizio; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pinto, Belmiro; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Pleier, Marc-Andre; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poffenberger, Paul; Poggioli, Luc; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Ponsot, Patrick; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Popule, Jiri; Portell Bueso, Xavier; Porter, Robert; Pospelov, Guennady; Pospisil, Stanislav; Potekhin, Maxim; Potrap, Igor; Potter, Christina; Potter, Christopher; Potter, Keith; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Pribyl, Lukas; Price, Darren; Price, Lawrence; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Puigdengoles, Carles; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qi, Ming; Qian, Jianming; Qian, Weiming; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radeka, Veljko; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renkel, Peter; Rescia, Sergio; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richards, Ronald; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Roa Romero, Diego Alejandro; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; 
Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosenbaum, Gabriel; Rosselet, Laurent; Rossetti, Valerio; Rossi, Leonardo Paolo; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumyantsev, Leonid; Rurikova, Zuzana; Rusakovich, Nikolai; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryan, Patrick; Rybkin, Grigori; Rzaeva, Sevda; Saavedra, Aldo; Sadrozinski, Hartmut; Sadykov, Renat; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandhu, Pawan; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sanny, Bernd; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sasaki, Osamu; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Savard, Pierre; Savine, Alexandre; Savinov, Vladimir; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R.~Dean; Schamov, Andrey; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitz, Martin; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schreiner, Alexander; Schroeder, Christian; Schroer, Nicolai; Schroers, Marcel; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Sluka, Tomas; Smakhtin, Vladimir; Smirnov, Sergei; Smirnov, Yuri; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; 
Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Soluk, Richard; Sondericker, John; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spencer, Edwin; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St. Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stancu, Stefan Nicolae; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stastny, Jan; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Su, Dong; Soh, Dart-yin; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Takuya; Suzuki, Yu; Sykora, Ivan; Sykora, Tomas; Szymocha, Tadeusz; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Ryan P.; Taylor, Wendy; Teixeira-Dias, Pedro; Ten Kate, Herman; Teng, Ping-Kun; Tennenbaum-Katan, Yaniv-David; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Stan; Thompson, Emily; Thompson, Peter; Thompson, Paul; Thompson, Ray; Thomson, Evelyn; Thun, Rudolf; Tic, Tomas; Tikhomirov, Vladimir; Tikhonov, Yury; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomasek, Lukas; Tomasek, Michal; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tuggle, Joseph; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Tuts, Michael; Twomey, Matthew Shaun; 
Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Berg, Richard; van der Graaf, Harry; van der Kraaij, Erik; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasilyeva, Lidia; Vassilakopoulos, Vassilios; Vazeille, Francois; Vellidis, Constantine; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Villa, Mauro; Villani, Giulio; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Viret, Sébastien; Virzi, Joseph; Vitale , Antonio; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Matteo; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vudragovic, Dusan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Walbersloh, Jorg; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Wang, Chiho; Wang, Haichen; Wang, Jin; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Wastie, Roy; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Manuel; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Werthenbach, Ulrich; Wessels, Martin; Whalen, Kathleen; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Williams, Eric; Williams, Hugh; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wright, Dennis; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wulf, Evan; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xu, Da; Xu, Neng; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Zhaoyu; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ye, Jingbo; Ye, 
Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yuan, Li; Yurkewicz, Adam; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zambrano, Valentina; Zanello, Lucia; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Qizhi; Zhang, Xueyao; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zutshi, Vishnu

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  9. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  10. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
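
    The three similarity indices quoted above are standard overlap measures between binary masks. The following is a minimal sketch, not code from the study, and it assumes that the Inclusion index is defined as the fraction of the gold-standard volume covered by the automatic contour:

    ```python
    # Overlap metrics between an automatic segmentation and a gold-standard
    # segmentation, both given as boolean voxel masks.
    import numpy as np

    def overlap_metrics(auto_mask, gold_mask):
        auto = auto_mask.astype(bool)
        gold = gold_mask.astype(bool)
        intersection = np.logical_and(auto, gold).sum()
        union = np.logical_or(auto, gold).sum()
        dsc = 2.0 * intersection / (auto.sum() + gold.sum())  # Dice similarity coefficient
        ji = intersection / union                             # Jaccard index
        ini = intersection / gold.sum()                       # Inclusion index (assumed definition)
        return {"DSC": dsc, "JI": ji, "INI": ini}

    # Toy example with two overlapping 3D boxes
    auto = np.zeros((4, 4, 4), dtype=bool); auto[1:3, 1:3, 1:3] = True
    gold = np.zeros((4, 4, 4), dtype=bool); gold[1:4, 1:3, 1:3] = True
    print(overlap_metrics(auto, gold))
    ```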

  11. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  14. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  15. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  16. ATLAS status and physics program

    International Nuclear Information System (INIS)

    Full text: The ATLAS detector will observe proton collisions in the Large Hadron Collider (LHC) at CERN, which is scheduled for commissioning in 2007. When operational the LHC will collide protons at a centre-of-mass energy of 14 TeV with nominally 2 × 10^8 collisions per second at each of four beam-crossing points. ATLAS has been optimised for the detection of the hypothesised Higgs Boson, the only missing component of the otherwise experimentally well-verified electro-weak theory. In addition ATLAS is also sensitive to many other physics processes including QCD, b-physics, heavy ion interactions and those that could provide first evidence for super-symmetry. The current status of the LHC and the various aspects of the ATLAS detector will be discussed as well as the ability of ATLAS to observe new physics. The Australian contributions to the ATLAS project will also be described. These include: 1. Development and implementation of components of the Semi-Conductor Tracker (SCT), which provides spatial information for charged particles traversing the ATLAS inner detector. 2. Fast algorithms for simulating electromagnetic events in the calorimeter. 3. Development and application of fast reconstruction algorithms within the ATLAS software framework. 4. Analysis of Monte-Carlo data produced using simulated models of the ATLAS detector. The information provided will determine the most efficient strategies in searching for new physics once collisions at the LHC commence. 5. Advances in grid computing to handle the storage, transfer and offline processing of data amassed by LHC experiments, which totals over 2.4 P-bytes per annum. Copyright (2005) Australian Institute of Physics

  17. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Detector, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing LHC with Cosmic rays, Heavy ion collisions, Trigger and Data Acquisition TDAQ, Computing, the LHC and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration as well as a short list of videos on ATLAS available for viewing.

  18. Mongolian Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climatic atlas dated 1985, in Mongolian, with introductory material also in Russian and English. One hundred eight pages in single page PDFs.

  19. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, the authors' list, the preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to remain an easy task, given the long lifetime of the experiment and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
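
    The "intermediate layer" idea described above can be pictured as a thin adapter layer: user code talks to one generic search interface while per-technology adapters hide how each database is actually queried. The sketch below is illustrative only, with hypothetical class and table names, and is not the Glance implementation:

    ```python
    # Generic database access layer: callers never see the underlying technology.
    class OracleAdapter:
        def search(self, table, **criteria):
            # would build and run an Oracle query here
            return [{"source": "oracle", "table": table, **criteria}]

    class MySQLAdapter:
        def search(self, table, **criteria):
            # would build and run a MySQL query here
            return [{"source": "mysql", "table": table, **criteria}]

    class GenericSearch:
        """Single entry point that fans a request out to all registered adapters."""
        def __init__(self, adapters):
            self.adapters = adapters

        def search(self, table, **criteria):
            results = []
            for adapter in self.adapters:
                results.extend(adapter.search(table, **criteria))
            return results

    glance_like = GenericSearch([OracleAdapter(), MySQLAdapter()])
    print(glance_like.search("members", institute="CERN"))
    ```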

  20. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    ATLAS Experiment is an international collaboration where more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointment, authors' list, preparation and publication of papers and speakers nomination. Previously, most of the information was accessible by a limited group and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and unlike users needs. Moreover, the systems were not designed to handle new requirements. The maintenance has to be an easy task due to the long lifetime experiment and professionals turnover. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems were built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents the overview of the Glance information framework and describes the privilege mechanism developed to grant different level of access for each member and system.

  1. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the particularity that this is a distributed Tier-2 composed of three sites, whose members are involved in ATLAS computing tasks and form a hub of research, innovation and education.

  2. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...

  3. Computer-aided evaluation as an adjunct to revised BI-RADS Atlas: improvement in positive predictive value at screening breast MRI

    Energy Technology Data Exchange (ETDEWEB)

    Gweon, Hye Mi; Cho, Nariya; Seo, Mirinae; Chu, A. Jung; Moon, Woo Kyung [Seoul National University College of Medicine and Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2014-08-15

    To investigate whether kinetic features via magnetic resonance (MR)-computer-aided evaluation (CAE) can improve the positive predictive value (PPV) of morphological descriptors for suspicious lesions at screening breast MRI. One hundred and sixteen consecutive, suspiciously enhancing lesions detected at contralateral breast MRI screening in 116 women with newly-diagnosed breast cancers were included. Morphological descriptors according to the revised BI-RADS Atlas and kinetic features from MR-CAE were analysed. The PPV of each descriptor was analysed to identify subgroups in which PPV could be improved by the addition of MR-CAE. When biopsy recommendations were downgraded to follow-up in cases where there were both the absence of enhancement at a 50 % threshold and the absence of delayed washout, PPV increased from 0.328 (95 % CI, 0.249-0.417) to 0.500 (95 % CI, 0.387- 0.613). Two ductal carcinoma in situ (DCIS) non-mass enhancement (NME) lesions were missed. Application of downgrading criteria to foci or masses led to increased PPV from 0.310 (95 % CI, 0.216-0.419) to 0.437 (95 % CI, 0.331-0.547) without missing cancers. MR-CAE has the potential to improve the PPV of breast MR imaging by reducing the number of false positives. When suspicious mass lesions do not show enhancement at a 50 % threshold nor delayed washout, follow-up rather than biopsy can be considered. (orig.)
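
    The positive predictive value is simply the fraction of lesions recommended for biopsy that turn out to be malignant, and the proposed rule downgrades a biopsy recommendation when a lesion neither enhances at the 50 % threshold nor shows delayed washout. A hedged sketch of that logic follows; the field names and toy data are hypothetical, not the study's:

    ```python
    # PPV before and after applying the CAE-based downgrading rule.
    def ppv(lesions):
        biopsied = [l for l in lesions if l["recommend_biopsy"]]
        if not biopsied:
            return 0.0
        return sum(l["malignant"] for l in biopsied) / len(biopsied)

    def apply_downgrade(lesions):
        """Downgrade biopsy to follow-up when a lesion neither enhances at the
        50 % threshold nor shows delayed washout on the kinetic maps."""
        out = []
        for l in lesions:
            l = dict(l)
            if l["recommend_biopsy"] and not l["enhances_at_50pct"] and not l["delayed_washout"]:
                l["recommend_biopsy"] = False
            out.append(l)
        return out

    lesions = [
        {"recommend_biopsy": True, "enhances_at_50pct": True,  "delayed_washout": True,  "malignant": True},
        {"recommend_biopsy": True, "enhances_at_50pct": False, "delayed_washout": False, "malignant": False},
        {"recommend_biopsy": True, "enhances_at_50pct": True,  "delayed_washout": False, "malignant": False},
    ]
    print("PPV before:", ppv(lesions), "PPV after:", ppv(apply_downgrade(lesions)))
    ```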

  4. Computer-aided evaluation as an adjunct to revised BI-RADS Atlas: improvement in positive predictive value at screening breast MRI

    International Nuclear Information System (INIS)

    To investigate whether kinetic features via magnetic resonance (MR)-computer-aided evaluation (CAE) can improve the positive predictive value (PPV) of morphological descriptors for suspicious lesions at screening breast MRI. One hundred and sixteen consecutive, suspiciously enhancing lesions detected at contralateral breast MRI screening in 116 women with newly-diagnosed breast cancers were included. Morphological descriptors according to the revised BI-RADS Atlas and kinetic features from MR-CAE were analysed. The PPV of each descriptor was analysed to identify subgroups in which PPV could be improved by the addition of MR-CAE. When biopsy recommendations were downgraded to follow-up in cases where there were both the absence of enhancement at a 50 % threshold and the absence of delayed washout, PPV increased from 0.328 (95 % CI, 0.249-0.417) to 0.500 (95 % CI, 0.387- 0.613). Two ductal carcinoma in situ (DCIS) non-mass enhancement (NME) lesions were missed. Application of downgrading criteria to foci or masses led to increased PPV from 0.310 (95 % CI, 0.216-0.419) to 0.437 (95 % CI, 0.331-0.547) without missing cancers. MR-CAE has the potential to improve the PPV of breast MR imaging by reducing the number of false positives. When suspicious mass lesions do not show enhancement at a 50 % threshold nor delayed washout, follow-up rather than biopsy can be considered. (orig.)

  5. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation in the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided the basis for a scalable production system framework with template definitions of such many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). This development required modern computing technologies and approaches. We report technical details of this development: database implementation, server logic and Web user interface technologies.

  6. Class Generation for Numerical Wind Atlases

    DEFF Research Database (Denmark)

    Cutler, N.J.; Jørgensen, B.H.; Ersbøll, Bjarne Kjær; Badger, J.

    2006-01-01

    A new optimised clustering method is presented for generating wind classes for mesoscale modelling to produce numerical wind atlases. It is compared with the existing method of dividing the data in 12 to 16 sectors, 3 to 7 wind-speed bins and dividing again according to the stability of the ... atmosphere. Wind atlases are typically produced using many years of on-site wind observations at many locations. Numerical wind atlases are the result of mesoscale model integrations based on synoptic scale wind climates and can be produced in a number of hours of computation. 40 years of twice daily NCEP ... optimising the representation of the data and by automating the procedure more. The Karlsruhe Atmospheric Mesoscale Model (KAMM) is combined with the WAsP analysis to produce numerical wind atlases for two sites, Ireland and Egypt. The model results are compared with wind atlases made from measurements at ...
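
    The conventional class generation that the paper compares against amounts to assigning each (wind speed, direction, stability) sample to one of n_sectors × n_speed_bins × n_stabilities classes. The sketch below shows that binning only; the bin edges and stability labels are assumptions, not the paper's values:

    ```python
    # Assign a wind observation to a sector / speed-bin / stability class.
    def wind_class(speed, direction_deg, stability,
                   n_sectors=12, speed_edges=(3.0, 6.0, 9.0, 12.0, 15.0, 20.0),
                   stabilities=("stable", "neutral", "unstable")):
        sector_width = 360.0 / n_sectors
        # sector 0 is centred on north; shift by half a sector before binning
        sector = int(((direction_deg + sector_width / 2) % 360.0) // sector_width)
        speed_bin = sum(speed >= edge for edge in speed_edges)   # 0 .. len(speed_edges)
        stab_idx = stabilities.index(stability)
        n_speed_bins = len(speed_edges) + 1
        return (sector * n_speed_bins + speed_bin) * len(stabilities) + stab_idx

    # A 7.5 m/s north-westerly flow under neutral stratification:
    print(wind_class(7.5, 315.0, "neutral"))
    ```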

  7. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division; Hlava, K. [Environmental Science Division; Greenwood, H. [Environmentall Science Division; Carr, A. [Environmental Science Division

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  8. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  9. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  10. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  11. ATLAS Story

    CERN Multimedia

    Nordberg, Markus

    2012-01-01

    This film, produced in July 2012, explains how fundamental research connects to society and what benefits a collaborative way of working may generate in the future, using the ATLAS Collaboration as a case study. The film is intellectually inspired by the book "Collisions and Collaboration" (OUP) edited by Max Boisot, see: collisionsandcollaboration.com. The film is directed by Andrew Millington (OMNI Communications).

  12. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job have increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to 'transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...
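
    The data-driven idea, that a job names only its input and desired output and the infrastructure runs just the substeps in between, can be sketched as a small chain resolver. This is illustrative only and is not the ATLAS Job Transform code:

    ```python
    # Substeps declared by the data type they consume and produce.
    SUBSTEPS = {
        ("RAW", "ESD"): "RAWtoESD",
        ("ESD", "AOD"): "ESDtoAOD",
        ("AOD", "NTUP"): "AODtoNTUP",
    }

    def plan(input_type, output_type):
        """Return the ordered list of substeps linking input_type to output_type."""
        chain, current = [], input_type
        while current != output_type:
            for (src, dst), name in SUBSTEPS.items():
                if src == current:
                    chain.append(name)
                    current = dst
                    break
            else:
                raise ValueError(f"no substep consumes {current}")
        return chain

    print(plan("RAW", "AOD"))   # ['RAWtoESD', 'ESDtoAOD'] - AODtoNTUP is skipped
    print(plan("ESD", "NTUP"))  # ['ESDtoAOD', 'AODtoNTUP'] - RAWtoESD is skipped
    ```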

  13. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job have increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to `transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...

  14. Web Exhibition – ATLASES: Poetics, Politics, and Performance

    Directory of Open Access Journals (Sweden)

    Nedjeljko Frančula

    2013-12-01

    Full Text Available ATLASES: Poetics, Politics, and Performance is a web exhibition of atlases from the Special Collections and School of Geographical Sciences of the University of Bristol (http://uobatlases.net/. It includes atlases produced between 1570 to approximately 1970.The exhibition consists of four thematic parts. Renaissance Theatres contains famous and les famous atlases produced between the end of the 16th century to the middle of the 17th century, such as atlases by Ortelius (1574, Camden (1610, Speed (1611 and four atlas tomes by Blaeu (1645. Rhetoric of Truth contains geological and archaeological atlases from the 18th and the beginning of the 19th century. However, Rhetoric of Truth is not only limited to renaissance, but it also encompasses first computer generated atlases, e.g. Atlas of Breeding Birds in England and Ireland (1976 and others. The Colonial Gaze focuses on atlases applied in colonial projects and land exploitation in Africa and the Caribbean Islands, as well as in circulation of race theories in Europe and North America at the end of the 19th century. The last part, National Identities and Conflict explores the role of atlas as a powerful instrument for visualizing conflicts and shaping territorial-political ideas in the 20th century.

  15. ATLAS DQ2 Deletion Service

    CERN Document Server

    OLEYNIK, D; The ATLAS collaboration; GARONNE, V; CAMPANA, S

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 Deletion Service is one of the most important DDM services. This distributed service interacts with 3rd party grid middleware and the DQ2 catalogues to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details which are used to achieve the high performance of the service, accomplished without overloading either site storage, catalogues or other DQ2 components. Special attention is also paid to the deletion monitoring service that allows operators a detailed view of the working system.

  16. ATLAS DQ2 Deletion Service

    CERN Document Server

    OLEYNIK, D; The ATLAS collaboration; GARONNE, V; CAMPANA, S

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 deletion service is one of the most important DDM services. This distributed service interacts with 3rd party grid middleware and the DQ2 catalogs to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details which are used to achieve the high performance of the service (peaking at more than 4 million files deleted per day), accomplished without overloading either site storage, catalogs or other DQ2 components. Special attention is also paid to the deletion monitoring service that allows operators a detailed view of the working system.
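
    The retry, check-pointing and load-management behaviour described above follows a generic pattern: delete in bounded bursts, persist progress so an interrupted run can resume, and re-queue transient failures a limited number of times. The following is a hedged sketch of that pattern, not the DQ2 implementation:

    ```python
    import json, time

    def run_deletion(files, delete_fn, checkpoint="deletion_state.json",
                     burst=100, max_retries=3, pause=1.0):
        # Resume from a previous checkpoint if one exists.
        try:
            with open(checkpoint) as f:
                state = json.load(f)
        except FileNotFoundError:
            state = {"done": [], "retries": {}}
        pending = [p for p in files if p not in state["done"]]
        while pending:
            for path in pending[:burst]:              # bounded burst: do not overload storage
                try:
                    delete_fn(path)                   # call out to the storage / catalogue layer
                    state["done"].append(path)
                except Exception:
                    n = state["retries"].get(path, 0) + 1
                    state["retries"][path] = n
                    if n >= max_retries:
                        state["done"].append(path)    # give up, leave for manual follow-up
            with open(checkpoint, "w") as f:
                json.dump(state, f)                   # checkpoint after every burst
            pending = [p for p in files if p not in state["done"]]
            time.sleep(pause)                         # throttle between bursts

    # e.g. run_deletion(["lfn-0001", "lfn-0002"], delete_fn=print)
    ```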

  17. The ATLAS Glasgow Overview Week

    CERN Multimedia

    Richard Hawkings

    2007-01-01

    The ATLAS Overview Weeks always provide a good opportunity to see the status and progress throughout the experiment, and the July week at Glasgow University was no exception. The setting, amidst the traditional buildings of one of the UK's oldest universities, provided a nice counterpoint to all the cutting-edge research and technology being discussed. And despite predictions to the contrary, the weather at these northern latitudes was actually a great improvement on the previous few weeks in Geneva. The meeting sessions comprehensively covered the whole ATLAS project, from the subdetector and TDAQ systems and their commissioning, through to offline computing, analysis and physics. As a long-time ATLAS member who remembers plenary meetings in 1991 with 30 people drawing detector layouts on a whiteboard, the hardware and installation sessions were particularly impressive - to see how these dreams have been translated into 7000 tons of reality (and with attendant cabling, supports and services, which certainly...

  18. A service-based SLA (Service Level Agreement) for the RACF (RHIC and ATLAS computing facility) at brookhaven national lab

    International Nuclear Information System (INIS)

    The RACF provides computing support to a broad spectrum of scientific programs at Brookhaven. The continuing growth of the facility, the diverse needs of the scientific programs and the increasingly prominent role of distributed computing require the RACF to change from a system-based to a service-based SLA with our user communities. A service-based SLA allows the RACF to coordinate more efficiently the operation, maintenance and development of the facility by mapping out a matrix of system and service dependencies and by creating a new, configurable alarm management layer that automates service alerts and notification of operations staff. This paper describes the adjustments made by the RACF to transition to a service-based SLA, including the integration of its monitoring software, alarm notification mechanism and service ticket system at the facility to make the new SLA a reality.
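
    The "matrix of system and service dependencies" feeding the alarm layer can be pictured as a mapping from user-facing services to the systems they rely on: when a system raises an alarm, every dependent service is flagged and its operators are notified. The service, system and contact names below are made up for illustration:

    ```python
    # Service-to-system dependency matrix and the alarm routing it enables.
    DEPENDS_ON = {
        "batch-processing":  {"hpss-tape", "gpfs-disk", "condor-scheduler"},
        "grid-gateway":      {"gpfs-disk", "network-core"},
        "interactive-login": {"gpfs-disk"},
    }
    ONCALL = {
        "batch-processing":  "batch-team@example.org",
        "grid-gateway":      "grid-team@example.org",
        "interactive-login": "ops@example.org",
    }

    def affected_services(failed_system):
        return sorted(s for s, deps in DEPENDS_ON.items() if failed_system in deps)

    def notify(failed_system):
        for service in affected_services(failed_system):
            print(f"ALERT {service}: degraded by {failed_system} -> {ONCALL[service]}")

    notify("gpfs-disk")   # all three services depend on the disk system
    ```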

  19. Pseudospread of the atlas: false sign of Jefferson fracture in young children

    International Nuclear Information System (INIS)

    Jefferson fractures are rare prior to teen-age. Three young children examined after trauma exhibited the characteristic spread appearance of the atlas, but fractures were excluded radiographically and clinically. A retrospective study demonstrated a similar appearance, termed pseudospread, in most children aged 3 months to 4 years, including over 90% during the second year. Pseudospread results from a discrepancy between the neural growth pattern of the atlas and the somatic pattern of the axis. An atlas spread index is defined and a normal range presented. When an atlas fracture is suggested by apparent lateral spread of the lateral atlas masses, computed tomography is useful to demonstrate an intact atlas ring

  20. ATLAS software packaging

    Science.gov (United States)

    Rybkin, Grigory

    2012-12-01

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of the TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, the PackDist package, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages—platform dependent (one per platform available), source code excluding header files, other platform-independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis projects (currently 6) used by particular physics groups on top of the full release. The tools provide an installation test for the full distribution kit. Packaging is done in two formats for use with the Pacman and RPM package managers. The tools are functional on the platforms supported by ATLAS—GNU/Linux and Mac OS X. The packaged software is used for software deployment on all ATLAS computing resources, from the detector and trigger computing farms, collaboration laboratories' computing centres, and grid sites, to physicist laptops and CERN VMFS, and covers the use cases of running all applications as well as of software development.

  1. ATLAS software packaging

    International Nuclear Information System (INIS)

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of the TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, the PackDist package, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages—platform dependent (one per platform available), source code excluding header files, other platform-independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis projects (currently 6) used by particular physics groups on top of the full release. The tools provide an installation test for the full distribution kit. Packaging is done in two formats for use with the Pacman and RPM package managers. The tools are functional on the platforms supported by ATLAS—GNU/Linux and Mac OS X. The packaged software is used for software deployment on all ATLAS computing resources, from the detector and trigger computing farms, collaboration laboratories' computing centres, and grid sites, to physicist laptops and CERN VMFS, and covers the use cases of running all applications as well as of software development.

  2. Atlas of liver imaging

    International Nuclear Information System (INIS)

    This atlas is an outcome of an IAEA co-ordinated research programme. In addition to Japan, nine other Asian countries participated in the project and 293 liver scintigrams (116 from Japanese institutions and 177 from seven Asian countries) were evaluated by physicians from the participating Asian countries. The computer analysis of the scan findings of the individual physicians was carried out and individual scores have been separately tabulated for: (a) scan abnormality; (b) space occupying lesions; (c) cirrhosis and (d) diffuse liver diseases like hepatitis. Refs, figs and tabs

  3. ATLAS DQ2 to Rucio renaming infrastructure

    CERN Document Server

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Stewart, G; Vigne, V; Molfetas, A

    2014-01-01

    To prepare the migration to the new ATLAS Data Management system called Rucio, a renaming campaign of all the physical files produced by ATLAS is needed. It represents around 300M files split between ∼120 sites with 6 different storage technologies. It must be done in a transparent way in order not to disrupt the ongoing computing activities. An infrastructure to perform this renaming has been developed and is presented in this paper as well as its performance.
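
    In practice such a campaign amounts to computing, for every existing replica, the physical path that the Rucio naming convention expects and renaming the file on the storage element. The sketch below uses the two-level md5 "hash" layout commonly documented for Rucio deterministic paths; treat both the layout and the prefix as assumptions rather than the exact scheme used in the ATLAS campaign:

    ```python
    import hashlib

    def rucio_path(scope: str, name: str, prefix: str = "/rucio") -> str:
        """Deterministic target path derived from the Rucio scope and file name."""
        digest = hashlib.md5(f"{scope}:{name}".encode()).hexdigest()
        return f"{prefix}/{scope}/{digest[0:2]}/{digest[2:4]}/{name}"

    # Old DQ2 replica -> new deterministic location (illustrative names only)
    print(rucio_path("mc12_8TeV", "EVNT.01234567._000001.pool.root.1"))
    ```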

  4. ATLAS DQ2 to Rucio renaming infrastructure

    CERN Document Server

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Stewart, G; Vigne, V

    2013-01-01

    To prepare the migration to the new ATLAS Data Management system called Rucio, a renaming campaign of all the physical files produced by ATLAS is needed. It represents around 300M files split between ∼120 sites with 6 different storage technologies. It must be done in a transparent way in order not to disrupt the ongoing computing activities. An infrastructure to perform this renaming has been developed and is presented in this paper as well as its performance.

  5. ATLAS DQ2 to Rucio renaming infrastructure

    International Nuclear Information System (INIS)

    To prepare the migration to the new ATLAS Data Management system called Rucio, a renaming campaign of all the physical files produced by ATLAS is needed. It represents around 300 million files split between ∼120 sites with 6 different storage technologies. It must be done in a transparent way in order not to disrupt the ongoing computing activities. An infrastructure to perform this renaming has been developed and is presented in this paper as well as its performance.

  6. ATLAS Recordings

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback welcome: our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us at wlap@umich.edu. Thank you. Enjoy the lectures!

  7. Using the Hadoop/MapReduce approach for monitoring the CERN storage system and improving the ATLAS computing model

    CERN Document Server

    Russo, Stefano Alberto; Lamanna, M

    The processing of huge amounts of data, an already fundamental task for research in the elementary particle physics field, is becoming more and more important also for companies operating in the Information Technology (IT) industry. In this context, if conventional approaches are adopted, several problems arise, starting from the congestion of the communication channels. In the IT sector, one of the approaches designed to minimize this congestion is to exploit data locality, or in other words, to bring the computation as close as possible to where the data resides. The most common implementation of this concept is the Hadoop/MapReduce framework. In this thesis work I evaluate the usage of Hadoop/MapReduce in two areas: a standard one similar to typical IT analyses, and an innovative one related to high energy physics analyses. The first consists in monitoring the history of the storage cluster which stores the data generated by the LHC experiments, the second in the physics analysis of the latter, ...
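
    The data-locality idea is exactly what Hadoop/MapReduce packages: the map step runs on the nodes holding the log blocks and only small key/value pairs cross the network to the reduce step. Below is a minimal Hadoop-streaming-style sketch for the storage-monitoring use case; the whitespace-separated "user bytes" log format is hypothetical, not the thesis's actual schema:

    ```python
    # Aggregate bytes read per user from storage access logs.
    import sys

    def mapper(lines):
        for line in lines:
            try:
                user, nbytes = line.split()[:2]
                yield user, int(nbytes)
            except ValueError:
                continue  # skip malformed records

    def reducer(pairs):
        totals = {}
        for user, nbytes in pairs:
            totals[user] = totals.get(user, 0) + nbytes
        return totals

    if __name__ == "__main__":
        # Locally this emulates "map | sort | reduce"; on Hadoop the framework
        # performs the shuffle between the two phases on the cluster.
        print(reducer(mapper(sys.stdin)))
    ```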

  8. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    Science.gov (United States)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of spatio-temporal spreading of tsunami waves both recorded from past events and hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in a sense that the quality of these datasets is assured. This is a prerequisite as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, being a derived value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates the changes in many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami-results within model iterations in little time. This is a significant improvement to linear processing on dedicated desktop machines or servers. This allows for accelerated/improved visual quality checking iterations, which in turn can provide a positive feedback into the overall model improvement iteratively. An approach to set

  9. Atlas Distributed Analysis Tools

    Science.gov (United States)

    de La Hoz, Santiago Gonzalez; Ruiz, Luis March; Liko, Dietrich

    2008-06-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration efforts in distributed analysis into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.
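
    The job splitting and merging that GANGA provides can be illustrated with a toy sketch: the input dataset is split into sub-jobs, each sub-job runs on a chosen backend, and the outputs are merged afterwards. The function and file names below are hypothetical and this is not the GANGA API:

    ```python
    # Split a dataset into sub-jobs, run them, and merge the outputs.
    def split(files, files_per_job=10):
        return [files[i:i + files_per_job] for i in range(0, len(files), files_per_job)]

    def run_subjob(file_chunk, backend="local"):
        # placeholder: submit the chunk to the chosen backend and return its output
        return {"backend": backend, "n_inputs": len(file_chunk)}

    def merge(outputs):
        return {"n_subjobs": len(outputs), "n_inputs": sum(o["n_inputs"] for o in outputs)}

    inputs = [f"AOD.{i:06d}.pool.root" for i in range(42)]
    results = [run_subjob(chunk, backend="grid") for chunk in split(inputs)]
    print(merge(results))   # 5 sub-jobs covering all 42 input files
    ```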

  10. ATLAS Distributed Analysis Tools

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Liko, Dietrich

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration efforts in distributed analysis into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting a...

  11. ATLAS analysis model and SUSY searches in lepton channels

    International Nuclear Information System (INIS)

    The ATLAS experiment built at CERN will start to take data in a few months. The computing model for data analysis includes many tools. The new ATLAS Event Data Model will be investigated here. As an example, the sensitivity of a SUSY search requiring 2/3/4 jets plus one lepton will be shown.

  12. 23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

  13. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  14. 11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

  15. 28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

  16. 14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti,visiting the SM18 area with G. De Rijk,the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

    CERN Multimedia

    Jean-claude Gadmer

    2011-01-01

    14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti,visiting the SM18 area with G. De Rijk,the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

  17. 30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

  18. ATLAS Fast Tracker Simulation Challenges

    CERN Document Server

    Adelman, Jahred; The ATLAS collaboration; Borodin, Mikhail; Chakraborty, Dhiman; García Navarro, José Enrique; Golubkov, Dmitry; Kama, Sami; Panitkin, Sergey; Smirnov, Yuri; Stewart, Graeme; Tompkins, Lauren; Vaniachine, Alexandre; Volpi, Guido

    2015-01-01

    To deal with the Big Data flood from the ATLAS detector, most events have to be rejected in the trigger system. The trigger rejection is complicated by the presence of a large number of minimum-bias events – the pileup. To limit pileup effects in the high-luminosity environment of the LHC Run-2, ATLAS relies on full tracking provided by the Fast TracKer (FTK), implemented with custom electronics. The FTK data processing pipeline has to be simulated in preparation for LHC upgrades to support electronics design and to develop trigger strategies at high luminosity. The simulation of the FTK - a highly parallelized system - has inherent performance bottlenecks on general-purpose CPUs. To take advantage of the Grid computing power, the FTK simulation is integrated with Monte Carlo simulations at the Production System level, above the ATLAS workload management system PanDA. We report on ATLAS experience with FTK simulations on the Grid and next steps for accommodating the growing requirements for resources during the LHC R...

  19. A Lego version of ATLAS

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    There's nothing very unusual about a small child making simple objects out of Lego. But wouldn't you be surprised to learn that one six-year-old has just made a life-like model of the ATLAS detector?   Bastian with his Lego ATLAS detector. © Photo provided by Kai Nicklas, Bastian's father. It all began a month ago when the boy's father was watching a video about the construction of the ATLAS detector on the Internet. He hadn't noticed that his son was watching it over his shoulder. The small boy was fascinated by what he was seeing on the computer screen and his first reaction was to exclaim: "Wow! That's a terrific machine! I think the people who built it must be really clever." The detector must have really fired his imagination because, after asking his father a few questions, he decided to make a Lego model of it. Look at the photo and you will see how closely the model he produced resembles the actual ATLAS detector. Is the little boy in question, Bastia...

  20. ATLAS Recordings

    CERN Multimedia

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there is a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC Users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  1. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, GA; The ATLAS collaboration

    2011-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 55PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations to manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: - Popularity service, which measures usage of data across ATLAS. - Space monitoring and accounting at sites. - Automated blacklisting service. - Cleaning agents, which trigger deletion of unused data at sites. - Deletion agents, to reliably delete unwanted data from sites. We describe the experience of data management operation in ATLAS computing, showing how these serv...

  2. Electroweak Physics with ATLAS

    OpenAIRE

    Akhundov, Arif

    2008-01-01

    The precision measurements of electroweak parameters of the Standard Model with the ATLAS detector at LHC are reviewed. An emphasis is put on the bridge connecting the ATLAS measurements with the SM analysis at LEP/SLC and the Tevatron.

  3. The ATLAS Fast Tracker

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    The use of tracking information at the trigger level in the LHC Run II period is crucial for the trigger and data acquisition (TDAQ) system. The tracking precision is in fact important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many contemporary collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, full reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a specific processor: the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high-performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker information. Patte...

  4. Triggering events with GPUs at ATLAS

    Science.gov (United States)

    Kama, S.; Soares, J. Augusto; Baines, J.; Bauce, M.; Bold, T.; Conde Muino, P.; Emeliyanov, D.; Goncalo, R.; Messina, A.; Negrini, M.; Rinaldi, L.; Sidoti, A.; Tavares Delgado, A.; Tupputi, S.; Vaz Gil Lopes, L.

    2015-12-01

    The growing complexity of events produced in LHC collisions demands increasing computing power both for the online selection and for the offline reconstruction of events. In recent years there have been significant advances in the performance of Graphics Processing Units (GPUs), both in terms of increased compute power and reduced power consumption, that make GPUs extremely attractive for use in complex particle physics experiments such as ATLAS. A small-scale prototype of the full ATLAS High Level Trigger has been implemented that exploits reconstruction algorithms optimized for this new massively parallel paradigm. We discuss the integration procedure followed for this prototype and present the performance achieved and the prospects for the future.

  5. ATLAS Distributed Data Analysis: challenges and performance

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  6. ATLAS Distributed Data Analysis: performance and challenges

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  7. The Scalable Brain Atlas: Instant Web-Based Access to Public Brain Atlases and Related Content.

    Science.gov (United States)

    Bakker, Rembrandt; Tiesinga, Paul; Kötter, Rolf

    2015-07-01

    The Scalable Brain Atlas (SBA) is a collection of web services that provide unified access to a large collection of brain atlas templates for different species. Its main component is an atlas viewer that displays brain atlas data as a stack of slices in which stereotaxic coordinates and brain regions can be selected. These are subsequently used to launch web queries to resources that require coordinates or region names as input. It supports plugins which run inside the viewer and respond when a new slice, coordinate or region is selected. It contains 20 atlas templates in six species, and plugins to compute coordinate transformations, display anatomical connectivity and fiducial points, and retrieve properties, descriptions, definitions and 3D reconstructions of brain regions. The ambition of SBA is to provide a unified representation of all publicly available brain atlases directly in the web browser, while remaining a responsive and lightweight resource that specializes in atlas comparisons, searches, coordinate transformations and interactive displays. PMID:25682754
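    As a concrete illustration of the coordinate-based web queries described above, the short Python sketch below asks an atlas service which brain region contains a given stereotaxic coordinate. The endpoint URL, parameter names and response fields are hypothetical placeholders for illustration only, not the actual Scalable Brain Atlas API.

```python
# Minimal sketch of querying a brain-atlas web service for the region at a
# stereotaxic coordinate. The endpoint URL, parameters and response keys are
# hypothetical placeholders, not the real Scalable Brain Atlas interface.
import requests

def region_at_coordinate(template, x, y, z,
                         base_url="https://example.org/sba/region_at"):
    """Return the region label reported by the (hypothetical) atlas service."""
    response = requests.get(
        base_url,
        params={"template": template, "x": x, "y": y, "z": z},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("region")

if __name__ == "__main__":
    # Example call for an arbitrary coordinate in an arbitrary template name.
    print(region_at_coordinate("example_template", 1.0, -2.5, 3.0))
```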

  8. The magnetically driven imploding liner parameter space of the ATLAS capacitor bank

    CERN Document Server

    Lindemuth, I R; Faehl, R J; Reinovsky, R E

    2001-01-01

    Summary form only given, as follows. The Atlas capacitor bank (23 MJ, 30 MA) is now operational at Los Alamos. Atlas was designed primarily to magnetically drive imploding liners for use as impactors in shock and hydrodynamic experiments. We have conducted a computational "mapping" of the high-performance imploding liner parameter space accessible to Atlas. The effect of charge voltage, transmission inductance, liner thickness, liner initial radius, and liner length has been investigated. One conclusion is that Atlas is ideally suited to be a liner driver for liner-on-plasma experiments in a magnetized target fusion (MTF) context. The parameter space of possible Atlas reconfigurations has also been investigated.

  9. Simulation of the heat transfer around the ATLAS muon chambers

    CERN Multimedia

    2005-01-01

    This 2D simulation of the ATLAS muon chambers was recently carried out by a small team of CERN engineers specialising in the numerical computation of fluid dynamics, in other words the flow of fluids and heat.

  10. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that this is a distributed Tier-2 composed of three sites, whose members are involved in ATLAS computing tasks and in a hub of research, innovation and education.

  11. EnviroAtlas - Phoenix, AZ - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Phoenix, AZ Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  12. EnviroAtlas - Portland, OR - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Portland, OR Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  13. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation

    Science.gov (United States)

    Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O’Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson’s disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5–0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0–0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access. PMID:27285947
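    The Dice Similarity Coefficient used above has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (complete overlap). The sketch below is my own illustration of that formula for two binary masks, not code from the study.

```python
# Dice Similarity Coefficient for two binary segmentation masks:
# DSC = 2*|A intersect B| / (|A| + |B|), from 0 (no overlap) to 1 (identical).
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 3x3 masks that partially overlap.
manual = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
atlas_based = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
print(round(dice_coefficient(manual, atlas_based), 2))  # 0.75
```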

  14. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  15. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)]

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
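    A minimal sketch of the two-stage selection logic outlined in the Methods above, in my own simplified rendering rather than the authors' implementation: a cheap relevance metric trims the full collection to an augmented subset, and the expensive full-fledged metric is paid only for that subset. The callables cheap_similarity and accurate_similarity and the fixed subset sizes are placeholders; in the paper the augmented subset size is derived from an inference model relating the two metrics.

```python
# Two-stage atlas selection (simplified sketch, assumed interfaces):
# stage 1 ranks all atlases with a low-cost metric and keeps an augmented
# subset; stage 2 re-ranks that subset with the expensive metric and keeps
# the final fusion set.
def two_stage_selection(target, atlases, cheap_similarity, accurate_similarity,
                        augmented_size=10, fusion_size=4):
    # Stage 1: preliminary selection with a low-cost registration/metric.
    augmented = sorted(atlases,
                       key=lambda atlas: cheap_similarity(target, atlas),
                       reverse=True)[:augmented_size]
    # Stage 2: refinement with full-fledged registration, paid only for the
    # augmented subset instead of the whole collection.
    fusion_set = sorted(augmented,
                        key=lambda atlas: accurate_similarity(target, atlas),
                        reverse=True)[:fusion_size]
    return fusion_set
```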

  16. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit

  17. Optimal number of atlases and label fusion for automatic multi-atlas-based brachial plexus contouring in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    The present study aimed to define the optimal number of atlases for automatic multi-atlas-based brachial plexus (BP) segmentation and to compare Simultaneous Truth and Performance Level Estimation (STAPLE) label fusion with Patch label fusion using the ADMIRE® software. The accuracy of the autosegmentations was measured by comparing all of the generated autosegmentations with the anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were used for automatic multi-atlas-based segmentation. To determine the optimal number of atlases, one atlas was selected as a patient and the 11 remaining atlases were registered onto this patient using a deformable image registration algorithm. Next, label fusion was performed by using every possible combination of 2 to 11 atlases, once using STAPLE and once using Patch. This procedure was repeated for every atlas as a patient. The similarity of the generated automatic BP segmentations and the gold standard segmentation was measured by calculating the average Dice similarity coefficient (DSC), Jaccard index (JI) and True positive rate (TPR) for each number of atlases. These similarity indices were compared for the different numbers of atlases using an equivalence trial and for the two label fusion groups using an independent-sample t-test. DSCs and JIs were highest when using nine atlases with both STAPLE (average DSC = 0.532; JI = 0.369) and Patch (average DSC = 0.530; JI = 0.370). When comparing both label fusion algorithms using 9 atlases for both, DSC and JI values were not significantly different. However, significantly higher TPR values were achieved in favour of STAPLE (p < 0.001). When fewer than four atlases were used, STAPLE produced significantly lower DSC, JI and TPR values than did Patch (p = 0.0048). Using 9 atlases with STAPLE label fusion resulted in the most accurate BP autosegmentations (average DSC = 0.532; JI = 0.369 and TPR = 0.760). Only when
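    For orientation, the sketch below shows the simplest label-fusion step that such per-voxel atlas counts feed into: a plain majority vote over the candidate segmentations. It is only a stand-in for illustration; STAPLE additionally weights each atlas by its estimated performance level, and Patch fusion weights contributions by local patch similarity.

```python
# Majority-vote label fusion over registered candidate segmentations
# (illustrative stand-in; STAPLE and Patch fusion use weighted variants).
import numpy as np

def majority_vote_fusion(label_maps):
    """label_maps: list of integer label arrays with identical shape."""
    stacked = np.stack(label_maps)             # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    votes = np.zeros((n_labels,) + stacked.shape[1:], dtype=int)
    for label in range(n_labels):
        votes[label] = (stacked == label).sum(axis=0)
    return votes.argmax(axis=0)                # fused label per voxel

# Toy example with three 2x2 candidate segmentations.
maps = [np.array([[0, 1], [1, 1]]),
        np.array([[0, 1], [0, 1]]),
        np.array([[1, 1], [1, 0]])]
print(majority_vote_fusion(maps))  # [[0 1]
                                   #  [1 1]]
```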

  18. The evolution of the Trigger and Data Acquisition System in the ATLAS experiment (ACAT2013: 15. international workshop on advanced computing and analysis techniques in physics research)

    International Nuclear Information System (INIS)

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest to explore possible evolutions. We will also be upgrading the hardware of the TDAQ system by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still separated, on a unique hardware node. Prototyping efforts already demonstrated many benefits to the simplified design. In this paper we report on the design and the development status of this new system

  19. ATLAS TDAQ application gateway upgrade during LS1

    CERN Document Server

    KOROL, A; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, A C; DUBROV, S; HAFEEZ, M; LEE, C J; SCANNICCHIO, D A; TWOMEY, M; VORONKOV, A; ZAYTSEV, A

    2014-01-01

    The ATLAS Gateway service is implemented with a set of dedicated computer nodes to provide fine-grained access control between the CERN General Public Network (GPN) and the ATLAS Technical Control Network (ATCN). ATCN connects the ATLAS online farm used for ATLAS Operations and data taking, including the ATLAS TDAQ (Trigger and Data Acquisition) and DCS (Detector Control System) nodes. In particular, it provides restricted access to the web services (proxy), general login sessions (via SSH and RDP protocols), NAT and mail relay from ATCN. At the Operating System level the implementation is based on virtualization technologies. Here we report on the Gateway upgrade during the Long Shutdown 1 (LS1) period: it includes the transition to the latest production release of the CERN Linux distribution (SLC6), the migration to the centralized configuration management system (based on Puppet) and the redesign of the internal system architecture.

  20. Global Data Grid Efforts for ATLAS

    CERN Multimedia

    Gardner, R.

    2001-01-01

    Over the past two years computational data grids have emerged as a promising new technology for large scale, data-intensive computing required by the LHC experiments, as outlined by the recent "Hoffman" review panel that addressed the LHC computing challenge. The problem essentially is to seamlessly link physicists to petabyte-scale data and computing resources, distributed worldwide, and connected by high-bandwidth research networks. Several new collaborative initiatives in Europe, the United States, and Asia have formed to address the problem. These projects are of great interest to ATLAS physicists and software developers since their objective is to offer tools that can be integrated into the core ATLAS application framework for distributed event reconstruction, Monte Carlo simulation, and data analysis, making it possible for individuals and groups of physicists to share information, data, and computing resources in new ways and at scales not previously attempted. In addition, much of the distributed IT...

  1. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with a significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  2. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)]

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with a significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  3. The ATLAS ARC backend to HPC

    Science.gov (United States)

    Haug, S.; Hostettler, M.; Sciacca, F. G.; Weber, M.

    2015-12-01

    The current distributed computing resources used for simulating and processing collision data collected by ATLAS and the other LHC experiments are largely based on dedicated x86 Linux clusters. Access to resources, job control and software provisioning mechanisms are quite different from the common concept of self-contained HPC applications run by particular users on specific HPC systems. We report on the development and the usage in ATLAS of an SSH backend to the Advanced Resource Connector (ARC) middleware to enable HPC-compliant access, and on the corresponding software provisioning mechanisms.

  4. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R. [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland)]; Landberg, L. [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)]

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated, improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data and station descriptions of 27 measuring stations is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  5. All 2006 ATLAS Tutorials online

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    The University of Michigan has completed its full agenda of Web Lecture recording for ATLAS for 2006. The archives include all three ATLAS Week Plenary Sessions, as well as a large variety of tutorials. They are accessible at this location. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. This is the first year our group has been asked to provide this complete service to the collaboration, so any and all feedback is welcome. We would especially like to know if you had any difficulties viewing the lectures, if you found the selection of material to be useful, and/or if you think there are any other specific events we ought to cover in 2007. Please send your comments to wlap@umich.edu. We look forward to bringing you a rich variety of new lectures in 2007, starting with the ATLAS Distributed Computing Tutorial on Feb 1, 2 in Edinburgh and concluding with the Higgs discovery talk (of course). Enjoy the Lec...

  6. Computer tomographic imaging and anatomic correlation of the human brain: A comparative atlas of thin CT-scan sections and correlated neuro-anatomic preparations

    International Nuclear Information System (INIS)

    It is of the greatest importance to the radiologist, the neurologist and the neurosurgeon to be able to localize topographically a pathological brain process on the CT scan as precisely as possible. For that purpose, the identification of as many anatomical structures as possible on the CT scan image are necessary and indispensable. In this atlas a great number of detailed anatomical data on frontal horizontal CT scan sections, each being only 2 mm thick, are indicated, e.g. the cortical gyri, the basal ganglia, details of the white matter, extracranial muscles and blood vessels, parts of the base and the vault of the skull, etc. The very precise topographical description of the numerous CT scan images was realized by the author by confrontation of these images with the corresponding anatomical sections of the same brain specimen, performed by an original technique

  7. Distributed data management in the ATLAS experiment

    International Nuclear Information System (INIS)

    Full text: ATLAS presents data management requirements on an unprecedented scale. Without the advent of grid computing it would be nearly impossible to process and analyze the vast amounts of data generated by the experiment in a timely manner. We have developed a novel system, DQ2, which has been designed to address these problems and provide scientists with easy access to a global distributed grid-storage infrastructure. We present the system's design, discuss its fault tolerance and scalability properties, and describe results from its daily usage in the experiment. We focus on the challenges faced during the last years and describe the solutions we have implemented to accommodate the changing ATLAS requirements. Finally, we anticipate the evolution of distributed data management for the next years as ATLAS moves into the physics analysis phase. (author)

  8. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Lassnig, M; Molfetas, A; Baristis, M; Zhang, D; Calvet, I; Beermann, T; Barreiro Megino, F; Tykhonov, A; Campana, S; Serfon, C; Oleynik, O; Petrosyan, A

    2012-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 70PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: - Popularity service, which measures usage of data across ATLAS. - Space monitoring and accounting at sites. - Automated exclusion service. - Cleaning agents, which trigger deletion of unused data at sites. - Deletion agents, to reliably delete unwanted data from sites. We...

  9. Renewable energy atlas of the United States.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J.A.; Hlava, K.; Greenwood, H.; Carr, A. (Environmental Science Division)

    2012-05-01

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. It is designed for the U.S. Department of Agriculture Forest Service (USFS) and other federal land management agencies to evaluate existing and proposed renewable energy projects. Much of the content of the Atlas was compiled at Argonne National Laboratory (Argonne) to support recent and current energy-related Environmental Impact Statements and studies, including the following projects: (1) West-wide Energy Corridor Programmatic Environmental Impact Statement (PEIS) (BLM 2008); (2) Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2010); (3) Supplement to the Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2011); (4) Upper Great Plains Wind Energy PEIS (WAPA/USFWS 2012, in progress); and (5) Energy Transport Corridors: The Potential Role of Federal Lands in States Identified by the Energy Policy Act of 2005, Section 368(b) (in progress). This report explains how to add the Atlas to your computer and install the associated software; describes each of the components of the Atlas; lists the Geographic Information System (GIS) database content and sources; and provides a brief introduction to the major renewable energy technologies.

  10. A Study of ATLAS Grid Performance for Distributed Analysis

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Wenaus, T

    2012-01-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of ground-breaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.

  11. Virtual Machine Logbook - Enabling virtualization for ATLAS

    International Nuclear Information System (INIS)

    ATLAS software has been developed mostly on the CERN Linux cluster lxplus or on similar facilities at the experiment's Tier-1 centers. The fast rise of virtualization technology has the potential to change this model, turning every laptop or desktop into an ATLAS analysis platform. In the context of the CernVM project we are developing a suite of tools and CernVM plug-in extensions to promote the use of virtualization for ATLAS analysis and software development. The Virtual Machine Logbook (VML), in particular, is an application to organize the work of physicists on multiple projects, logging their progress and speeding up "context switches" from one project to another. An important feature of VML is the ability to share the status of a given project with other colleagues with a single click. VML builds upon the save and restore capabilities of mainstream virtualization software like VMware, and provides a technology-independent client interface to them. A lot of emphasis in the design and implementation has gone into optimizing the save and restore process to make it practical to store many VML entries on a typical laptop disk or to share a VML entry over the network. At the same time, taking advantage of CernVM's plug-in capabilities, we are extending the CernVM platform to help increase the usability of ATLAS software. For example, we added the ability to start the ATLAS event display on any computer running CernVM simply by clicking a button in a web browser. We want to integrate VML seamlessly with CernVM's unique file system design to distribute ATLAS software efficiently to every physicist's computer. The CernVM File System (CVMFS) downloads files on demand via HTTP and caches them locally for future use. This reduces download sizes by one order of magnitude, making it practical for a developer to work with multiple software releases on a virtual machine.

  12. ATLAS Grid Data Processing: system evolution and scalability

    International Nuclear Information System (INIS)

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software and Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users providing data for physics analysis and other ATLAS main activities.

  13. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software by the use of multi-core processors in the computing farms and the experiences gained with multi-threading and multi-process technologies.

  14. Canadian ATLAS data center to support CERN's LHC

    CERN Multimedia

    2006-01-01

    "The biggest science experiment in history is currently underway at the world-famous CERN labs in Switzerland, and Canada is poised to play a critical role in its success. Thanks to a $10.5 million investment announced by the Canada Foundation for Innovation (CFI), an ultra-sophisticated computing facility -- the ATLAS Data Center -- will be created to support the ATLAS project at CERN's Large Hadron Collider (LHC)." (1 page)

  15. The ATLAS pixel detector

    OpenAIRE

    Cristinziani, M.

    2007-01-01

    After a ten-year planning and construction phase, the ATLAS pixel detector is nearing completion and is scheduled to be integrated into the ATLAS detector to take data with the first LHC collisions in 2007. An overview of the construction is presented, with particular emphasis on some of the major and most recent problems encountered and solved.

  16. ATLAS Thesis Awards 2015

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on Thursday 25 February. The winners also presented their work in front of members of the ATLAS Collaboration. Winners: Javier Montejo Berlingen, Barcelona (Spain), Ruth Pöttgen, Mainz (Germany), Nils Ruthmann, Freiburg (Germany), and Steven Schramm, Toronto (Canada).

  17. ATLAS-Hadronic Calorimeter

    CERN Multimedia

    2003-01-01

    Hall 180: work on the Hadronic Calorimeter. The ATLAS hadronic tile calorimeter. The Tile Calorimeter, which constitutes the central section of the ATLAS hadronic calorimeter, is a non-compensating sampling device made of iron and scintillating tiles. (IEEE Trans. Nucl. Sci. 53 (2006) 1275-81)

  18. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    La Givrine, near St Cergue: cross-country skiing and fondue at Basse Ruche with M. Nordberg, P. Jenni, M. Nessi, F. Gianotti and Co. ATLAS Management fondue dinner, reviewing the state of play of the experiment. Many fun scenes from cross-country skiing; after 41 minutes of the film the fondue dinner starts in a nice chalet with many people working on the ATLAS experiment.

  19. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk. Sequence 1: shots of the aircraft factory where machining for ATLAS is done; shots of aircraft; work on components for the ATLAS big wheel; discussions between Tikhonov and Nordberg in the workshop. Sequence 2: shots of downtown Novosibirsk, including the little church which is the mid-point of the Russian Federation. Sequence 3: interview of Yuri Tikhonov by Andrew Millington.

  20. A Slice of ATLAS

    CERN Multimedia

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  1. ATLAS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  2. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  3. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  4. The ATLAS tile calorimeter

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Louis Rose-Dulcina, a technician from the ATLAS collaboration, works on the ATLAS tile calorimeter. Special manufacturing techniques were developed to mass produce the thousands of elements in this detector. Tile detectors are made in a sandwich-like structure where these scintillator tiles are placed between metal sheets.

  5. ATLAS rewards industry

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three companies were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony

  6. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  7. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  8. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)]

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas-based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas-based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch-based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
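    The sketch below is a rough, simplified rendering of the "image of classifiers" idea for readers who want something concrete; it is not the author's code. After the training images are registered to a common atlas space, one small multiclass classifier is trained per voxel and later applied to a registered test image. Here the feature is just the intensity at that voxel and the registration step is omitted; both are placeholder simplifications.

```python
# Per-voxel multiclass classifiers on images assumed to be already registered
# to a common atlas space (simplified sketch; features and registration are
# placeholders, not the published method).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_classifier_atlas(train_images, train_labels):
    """train_images: (n_subjects, n_voxels) intensities in atlas space.
    train_labels: (n_subjects, n_voxels) integer labels in atlas space."""
    classifiers = []
    for v in range(train_images.shape[1]):
        X = train_images[:, [v]]          # single feature: intensity at voxel v
        y = train_labels[:, v]
        if len(np.unique(y)) == 1:        # degenerate voxel: constant label
            classifiers.append(int(y[0]))
        else:
            classifiers.append(LogisticRegression(max_iter=200).fit(X, y))
    return classifiers

def segment(classifiers, test_image):
    """test_image: (n_voxels,) intensities already registered to atlas space."""
    labels = np.empty(len(classifiers), dtype=int)
    for v, clf in enumerate(classifiers):
        labels[v] = clf if isinstance(clf, int) else clf.predict([[test_image[v]]])[0]
    return labels
```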

  9. Enhancing atlas based segmentation with multiclass linear classifiers

    International Nuclear Information System (INIS)

    Purpose: To present a method to enrich atlases for atlas-based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas-based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch-based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  10. ATLAS' major cooling project

    CERN Document Server

    2005-01-01

    In 2005, a considerable effort was put into commissioning the various units of ATLAS' complex cryogenic system, in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators are located in USA 15. Cryogenics plays a vital role in operating massive detectors such as ATLAS: in many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without the cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks, compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground, the helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  11. ATLAS Virtual Visits

    CERN Document Server

    Goldfarb, Steven; The ATLAS collaboration

    2015-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and of particle physics in general through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, which typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people from all of the world’s continents have actively participated in ATLAS Virtual Visits, with many more enjoying the experience through the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  12. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; monitoring these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow-up on bug reports by the shifter teams; and periodic software-cleaning weeks to further improve the quality of the offline software.

  13. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  14. Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    CERN Document Server

    Lopez-Perez, Juan Antonio; Salt, Jose; Ros, Eduardo

    2008-01-01

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered, so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems; some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about $1~\mathrm{TeV}^{-1}$, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the cou...

  15. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers for ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting written by about 1000 developers. Recent development has focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides a fully automated framework for release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  16. Distributed analysis in ATLAS using GANGA

    International Nuclear Information System (INIS)

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high: in every experiment, up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to ensure that all users can use the Grid without expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kinds of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We report on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment. Support is provided for all Grids presently used by ATLAS, namely LCG/EGEE, NDGF/NorduGrid, and OSG/PanDA. The integration and interaction of GANGA with the ATLAS data management system DQ2 is a key functionality. Intelligent job brokering is set up by using the job splitting mechanism together with dataset and file location knowledge. The brokering is aided by an automated system that regularly processes test analysis jobs at all ATLAS DQ2-supported sites. Large numbers of analysis jobs can be sent to the locations of data following the ATLAS computing model. GANGA supports, amongst other things, tasks of user analysis with reconstructed data and small-scale production of Monte Carlo data.
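
    The abstract above describes data-driven job splitting and brokering. The snippet below is a schematic illustration only, not the actual GANGA API: it splits a hypothetical replica catalogue into sub-jobs and assigns each sub-job to a site that hosts its input files.

```python
# Schematic sketch -- not the real GANGA API.  It mimics the data-driven splitting and
# brokering described above: input files are grouped by a site holding a replica, then
# each group is chunked into sub-jobs destined for that site (all names are hypothetical).
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubJob:
    files: list
    site: Optional[str] = None

def split_by_replica_site(replica_map, files_per_job=2):
    """replica_map: {file_name: [sites holding a replica]} -> list of brokered SubJobs."""
    by_site = defaultdict(list)
    for f, sites in replica_map.items():
        by_site[sites[0]].append(f)          # naive choice: first replica; real brokers also weigh load
    jobs = []
    for site, files in by_site.items():
        for i in range(0, len(files), files_per_job):
            jobs.append(SubJob(files=files[i:i + files_per_job], site=site))
    return jobs

if __name__ == "__main__":
    replicas = {                              # hypothetical dataset -> replica catalogue
        "AOD.001.root": ["IFIC-LCG2", "CERN-PROD"],
        "AOD.002.root": ["IFIC-LCG2"],
        "AOD.003.root": ["NDGF-T1"],
    }
    for j in split_by_replica_site(replicas):
        print(f"submit {len(j.files)} file(s) to {j.site}: {j.files}")
```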

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  18. Iberian ATLAS Cloud response during the first LHC collisions

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Borges, G; Borrego, C; Carvalho, J; David, M; Espinal, X; Fernández, A; Gomes, J; González de la Hoz, S; Kaci, M; Lamas, A; Nadal, J; Oliveira, M; Oliver, E; Osuna, C; Pacheco, A; Pardo, JJ; del Peso, J; Salt, J; Sánchez, J; Wolters, H

    2011-01-01

    The computing model of the ATLAS experiment at the LHC (Large Hadron Collider) is based on a tiered hierarchy that ranges from Tier0 (CERN) down to end-user's own resources (Tier3). According to the same computing model, the role of the Tier2s is to provide computing resources for event simulation processing and distributed data analysis. Tier3 centers, on the other hand, are the responsibility of individual institutions to define, fund, deploy and support. In this contribution we report on the operations of the ATLAS Iberian Cloud centers facing data taking and we describe some of the Tier3 facilities currently deployed at the Cloud.

  19. Iberian ATLAS Cloud response during the first LHC collisions

    International Nuclear Information System (INIS)

    The computing model of the ATLAS experiment at the LHC (Large Hadron Collider) is based on a tiered hierarchy that ranges from Tier0 (CERN) down to end-user's own resources (Tier3). According to the same computing model, the role of the Tier2s is to provide computing resources for event simulation processing and distributed data analysis. Tier3 centers, on the other hand, are the responsibility of individual institutions to define, fund, deploy and support. In this contribution we report on the operations of the ATLAS Iberian Cloud centers facing data taking and we describe some of the Tier3 facilities currently deployed at the Cloud.

  20. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Diplomatic Advisor; Dott.ssa Elisa GREGORINI, Private Secretary to the Minister; Dott. Massimo ZENNARO, Head of Press Relations; Prof. Roberto PETRONZIO, President of INFN (Istituto Nazionale di Fisica Nucleare); Dott. Luciano CRISCUOLI, Director General for Research, MIUR; Dott. Andrea MARINONI, Scientific Advisor to the Minister. CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson; Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Università & INFN, Torino; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader. Guests in the ATLAS exhibition area: Dr Marcello Givoletti, President of CAEN; Dr Davide Malacalza, President of ASG Ansaldo Superconductors; and users: Prof. Clara Matteuzzi, LHCb Collaboration, Universita' d...

  1. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months, activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss and address issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  4. A unified framework for cross-modality multi-atlas segmentation of brain MRI

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-01-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the...
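
    As a minimal illustration of the "propagate then fuse" pipeline mentioned above, the sketch below warps atlas label maps into the target space with a precomputed 2-D displacement field (nearest-neighbour sampling, so labels stay integral) and fuses them by majority vote. It is not the paper's generative model, only the standard baseline it builds on.

```python
# Illustrative sketch of standard multi-atlas label propagation and majority-vote fusion.
import numpy as np

def warp_labels_nn(labels, disp):
    """labels: (H, W) integer map; disp: (2, H, W) displacement field in voxels (target -> atlas)."""
    H, W = labels.shape
    gy, gx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(np.rint(gy + disp[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(gx + disp[1]).astype(int), 0, W - 1)
    return labels[sy, sx]                       # nearest-neighbour resampling of the label map

def majority_vote(label_maps):
    """Fuse propagated label maps (same shape, integer labels) by per-voxel majority vote."""
    stack = np.stack(label_maps)                # (n_atlases, H, W)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_labels)])
    return np.argmax(votes, axis=0)

# Usage: propagated = [warp_labels_nn(lab, disp) for lab, disp in zip(atlas_labels, fields)]
#        fused = majority_vote(propagated)
```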

  5. ATLAS Data Challenge 1

    CERN Document Server

    DC1 TaskForce

    2003-01-01

    The ATLAS Collaboration at CERN is preparing for data taking and analysis at the LHC, which will start in 2007. Therefore, in 2002 a series of Data Challenges (DCs) was started, whose goals are the validation of the Computing Model, of the complete software suite, and of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger and Physics communities, and the production of those large data samples as a worldwide distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. We were therefore faced with the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, the benefits o...

  6. ATLAS Forward Detectors and Physics

    CERN Document Server

    Soni, N

    2010-01-01

    In this communication I describe the ATLAS forward physics program and the detectors, LUCID, ZDC and ALFA that have been designed to meet this experimental challenge. In addition to their primary role in the determination of ATLAS luminosity these detectors - in conjunction with the main ATLAS detector - will be used to study soft QCD and diffractive physics in the initial low luminosity phase of ATLAS running. Finally, I will briefly describe the ATLAS Forward Proton (AFP) project that currently represents the future of the ATLAS forward physics program.

  7. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    International Nuclear Information System (INIS)

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold-standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and the inclusion index were calculated for every registered BP against the original gold-standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related, CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and the morphometric parameters were calculated. Results: A clear negative correlation between the difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.

  8. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold-standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and the inclusion index were calculated for every registered BP against the original gold-standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related, CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and the morphometric parameters were calculated. Results: A clear negative correlation between the difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
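
    For reference, the Dice and Jaccard similarity indices quoted above can be computed for two binary masks as in the short sketch below (illustrative code, not the study's pipeline).

```python
# Overlap measures between an automatic segmentation and a gold-standard mask.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())    # 2|A∩B| / (|A|+|B|)

def jaccard(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union                        # |A∩B| / |A∪B|

auto = np.array([0, 1, 1, 1, 0], dtype=bool)
gold = np.array([0, 1, 1, 0, 0], dtype=bool)
print(dice(auto, gold), jaccard(auto, gold))    # 0.8, 0.666...
```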

  9. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline

    Directory of Open Access Journals (Sweden)

    Jiahui Wang

    2014-02-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging, due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate for representing the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases together with label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we propose a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first pair and co-register all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification-based skull stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. AutoSeg achieved mean Dice coefficients of 81.73% for the
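
    The selection-then-fusion idea described above can be sketched as follows. This is not the AutoSeg implementation: the similarity measure here is a simple normalised-correlation stand-in for the intensity/shape graph weights, and the atlas label maps are assumed to be already registered to the subject.

```python
# Sketch: rank atlases by similarity to the subject, keep the closest k, then fuse their
# propagated labels with similarity-weighted majority voting.
import numpy as np

def similarity(a, b):
    """Crude normalised cross-correlation between two images of the same shape."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def select_and_fuse(subject, atlas_images, atlas_labels, k=3, n_classes=None):
    sims = np.array([similarity(subject, img) for img in atlas_images])
    keep = np.argsort(sims)[::-1][:k]                         # k most similar atlases
    if n_classes is None:
        n_classes = int(max(lab.max() for lab in atlas_labels)) + 1
    votes = np.zeros((n_classes,) + subject.shape)
    for i in keep:
        w = max(sims[i], 0.0)                                 # weight each vote by its similarity
        for c in range(n_classes):
            votes[c] += w * (atlas_labels[i] == c)
    return np.argmax(votes, axis=0), keep
```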

  10. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift-tube based detectors. These detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a few microns precision. This permits attaining optimal measurements of the parameters of the charged particles' trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large-scale computing simulation of the ATLAS detector has been performed, mimicking the ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment result for several challenges (real cosmic ray data taking and computing system commissioning) will be...

  11. The last ATLAS overview week now available on Web Lectures

    CERN Multimedia

    Jeremy Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as the medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the lectures and send us a note at wlap@umich.edu to tell us what you think. The newly available WLAP item relating to ATLAS is the following: ATLAS Week Plenary, CERN, 2-3 October 2006. All previous WLAP lectures are also available on the web.

  12. Atlas-Based Prostate Segmentation Using an Hybrid Registration

    CERN Document Server

    Martin, Sébastien; Troccaz, Jocelyne

    2008-01-01

    Purpose: This paper presents the preliminary results of a semi-automatic method for prostate segmentation of Magnetic Resonance Images (MRI), which aims to be incorporated in a navigation system for prostate brachytherapy. Methods: The method is based on the registration of an anatomical atlas, computed from a population of 18 MRI exams, onto a patient image. A hybrid registration framework, which couples an intensity-based registration with a robust point-matching algorithm, is used for both atlas building and atlas registration. Results: The method has been validated on the same dataset as the one used to construct the atlas, using the leave-one-out method. The results give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect to expert segmentations. Conclusions: We think that this segmentation tool may be a very valuable help to the clinician for routine quantitative image exploitation.
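
    The leave-one-out validation quoted above (mean error and standard deviation against expert segmentations) can be sketched as below; `build_atlas`, `register_and_segment` and `surface_distances` are hypothetical placeholders for the hybrid-registration pipeline, not functions from the paper.

```python
# Sketch of leave-one-out evaluation: each exam in turn plays the "patient", the rest
# build the atlas, and per-exam surface distances are pooled into a mean and std dev.
import numpy as np

def leave_one_out(exams, build_atlas, register_and_segment, surface_distances):
    errors = []
    for i, exam in enumerate(exams):
        training = exams[:i] + exams[i + 1:]          # all exams except the held-out one
        atlas = build_atlas(training)
        auto_seg = register_and_segment(atlas, exam.image)
        errors.extend(surface_distances(auto_seg, exam.expert_segmentation))
    errors = np.asarray(errors, dtype=float)
    return errors.mean(), errors.std()                # reported as mean error and std dev (mm)
```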

  13. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...
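
    The automatic-resubmission policy for transient failures described above can be illustrated with a small retry loop; this is a generic sketch, not the ATLAS production system code.

```python
# Illustrative retry policy: transient failures are resubmitted up to a limit (at the cost
# of the CPU time burnt by the failed attempts), permanent failures are reported at once.
import time

class TransientError(Exception): ...
class PermanentError(Exception): ...

def run_with_resubmission(job, max_attempts=3, backoff_s=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except TransientError:
            if attempt == max_attempts:
                raise                              # give up after the retry budget
            time.sleep(backoff_s * attempt)        # simple linear back-off before resubmitting
        except PermanentError:
            raise                                  # no point retrying, e.g. bad configuration

# Deterministic toy job: fails twice with a transient error, then succeeds.
attempts = {"n": 0}
def flaky_job():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("worker node lost")
    return "reconstructed output"

print(run_with_resubmission(flaky_job))
```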

  14. ATLAS Tier-3 within IFIC-Valencia analysis facility

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Fernández, A; Salt, J; Lamas, A; Fassi, F; Kaci, M; Oliver, E; Sánchez, J; Sánchez-Martínez, V

    2012-01-01

    The ATLAS Tier-3 at IFIC-Valencia is attached to a Tier-2 that provides 50% of the Spanish Federated Tier-2 resources. In its design, the Tier-3 includes a GRID-aware part that shares some of the features of the IFIC Tier-2, such as using Lustre as its file system. ATLAS users, who make up 70% of IFIC users, also have the possibility of analysing data with a PROOF farm and storing them locally. In this contribution we discuss the design of the analysis facility as well as the monitoring tools we use to control and improve its performance. We also comment on how the recent changes in the ATLAS computing GRID model affect IFIC. Finally, we present how this complex system can coexist with the other scientific applications running at IFIC (non-ATLAS users).

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  16. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of work and personal lives. At this point the computer is so common that we hardly notice it in our view. It is difficult to envision that, not that long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati...

  17. EnviroAtlas - Memphis, TN - EnviroAtlas Community Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Memphis, TN EnviroAtlas Community. It represents the outside edge of all the block groups included in the...

  18. Migration of ATLAS PanDA to CERN

    Science.gov (United States)

    Stewart, Graeme Andrew; Klimentov, Alexei; Koblitz, Birger; Lamanna, Massimo; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; Emanuel De Castro Faria Salgado, Pedro; Wenaus, Torre

    2010-04-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  19. ATLAS Event - First Splash of Particles in ATLAS

    CERN Multimedia

    ATLAS Outreach

    2008-01-01

    A simulated event. September 10, 2008 - The ATLAS detector lit up as a flood of particles traversed the detector when the beam was occasionally directed at a target near ATLAS. This allowed ATLAS physicists to study how well the various components of the detector were functioning in preparation for the forthcoming collisions. The first ATLAS data recorded on September 10, 2008 is seen here. Running time 24 seconds

  20. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2006-01-01

    CERN, Building 40. Interview with theorist Mr. Philip Hinchliffe (Berkeley), as well as an interview with his wife, Mrs. Hinchliffe, who is also Physics Department head at Berkeley. They are both working on the ATLAS Experiment.

  1. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2005-01-01

    ATLAS Physics Workshop at the University of Roma Tre, held from Monday 06 June 2005 to Saturday 11 June 2005. Shots: experts establishing the workshop, posters, people milling; Peter Jenni's introduction; many audience shots; sequences from various talks.

  2. Printed circuit for ATLAS

    CERN Multimedia

    Laurent Guiraud

    1999-01-01

    A printed circuit board made by scientists in the ATLAS collaboration for the transition radiation tracker (TRT). This will read out data produced when a high-energy particle crosses the boundary between two materials with different electrical properties.

  3. California Ocean Uses Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset is a result of the California Ocean Uses Atlas Project: a collaboration between NOAA's National Marine Protected Areas Center and Marine Conservation...

  4. PeptideAtlas

    Data.gov (United States)

    U.S. Department of Health & Human Services — PeptideAtlas is a multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. Mass...

  5. General Dynamics Atlas family

    Science.gov (United States)

    Oates, James

    Developments concerning the Atlas family of launch vehicles over the last three or four years are summarized. Attention is given to the center of gravity, load factors, acoustics, pyroshock, low-frequency sinusoidal vibration, and high-frequency random vibration.

  6. ATLAS Cavern baseplate

    CERN Multimedia

    2002-01-01

    This video shows the incredible amount of iron used for the ATLAS cavern. Please see the related links and videos concerning the civil engineering, where you can see the cavern excavation work in even more detail.

  7. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  8. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  11. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  12. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; De, K; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2014-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...
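
    As a toy illustration of the chained Monte Carlo workflow steps listed above, the sketch below strings steps together so that each consumes its predecessor's output dataset; the step names follow the abstract, but the implementation and the format suffixes are purely illustrative.

```python
# Toy multi-step workflow chain: each step maps an input dataset name to an output dataset name.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[str], str]     # input dataset name -> output dataset name

def run_workflow(input_dataset: str, steps: List[Step]) -> str:
    ds = input_dataset
    for step in steps:
        ds = step.run(ds)         # in a real system this would submit a task of many jobs
        print(f"{step.name}: produced {ds}")
    return ds

mc_chain = [
    Step("generate",    lambda ds: ds + ".EVNT"),
    Step("simulate",    lambda ds: ds + ".HITS"),
    Step("digitize",    lambda ds: ds + ".RDO"),
    Step("reconstruct", lambda ds: ds + ".AOD"),
    Step("make_ntuple", lambda ds: ds + ".NTUP"),
]
run_workflow("mc.123456.sample", mc_chain)
```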

  13. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid (WLCG), a global collaboration of computer centres that provides seamless access to computing resources, including data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  14. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid (WLCG), a global collaboration of computer centers that provides seamless access to computing resources, including data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  15. ATLAS Transitional Radiation Tracker

    CERN Multimedia

    ATLAS Outreach

    2006-01-01

    This colorful 3D animation is an excerpt from the film "ATLAS - Episode II, The Particles Strike Back", shot with a bug's-eye view of the inside of the detector. The viewer is taken on a tour of the inner workings of the transition radiation tracker within the ATLAS detector. Subjects covered include what the tracker is used to measure, its structure, what happens when particles pass through the tracker, and how it distinguishes between different types of particles within it.

  16. ATLAS physics results

    CERN Document Server

    Mitsou, Vasiliki A

    2015-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN has been successfully taking data since the end of 2009 in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV, and in heavy ion collisions. In these lectures, some of the most recent ATLAS results will be given on Standard Model measurements, the discovery of the Higgs boson, searches for supersymmetry and exotics and on heavy-ion results.

  17. ATLAS Jet Energy Scale

    OpenAIRE

    Schouten, D.; Tanasijczuk, A.; Vetterli, M. (Department of Physics, Simon Fraser University, Burnaby, BC, Canada); for the ATLAS Collaboration

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and most complicated, final-state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance for experiments at the Large Hadron Collider at CERN, yet it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy cor...

  18. ATLAS distributed analysis

    OpenAIRE

    Adams, David; Branco, Miguel; Albrand, Solveig; Rybkine, G.; Orellana, F.; Liko, D.; Tan, C.L.; Deng, W.; Kannan, C.; Harrison, Karl; Fassi, Farida; Fulachier, J.; Chetan, N.; Haeberli, C.; Soroko, A.

    2004-01-01

    The ATLAS distributed analysis (ADA) system is described. The ATLAS experiment has more than 2000 physicists from 150 institutions in 34 countries. Users, data and processing are distributed over these sites. ADA makes use of a collection of high-level web services whose interfaces are expressed in terms of AJDL (abstract job definition language), which includes descriptions of datasets, transformations and jobs. The high-level services are implemented using generic parts...

  19. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instrumentation in detail.
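
    As a quick consistency check of the stated scaling, and assuming the simple relation volume = flow area × height, a half-height, 1/288-volume facility implies a flow-area ratio of 1/144:

```python
# Simple arithmetic check of the reported scaling ratios (assumes volume = area * height).
height_ratio = 1 / 2
volume_ratio = 1 / 288
area_ratio = volume_ratio / height_ratio
print(area_ratio)   # 0.006944... = 1/144
```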

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  4. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  6. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    1999-01-01

    Different phases of the work at Point 1, the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are busy finishing the various infrastructures for ATLAS. Real underground footage; the film has its original working sound.

  7. ATLAS Overview Week at Brookhaven

    CERN Multimedia

    Pilcher, J

    Over 200 ATLAS participants gathered at Brookhaven National Laboratory during the first week of June for our annual overview week. Some system communities arrived early and held meetings on Saturday and Sunday, and the detector interface group (DIG) and Technical Coordination also took advantage of the time to discuss issues of interest for all detector systems. Sunday was also marked by a workshop on the possibilities for heavy ion physics with ATLAS. Beginning on Monday, and for the rest of the week, sessions were held in common in the well equipped Berkner Hall auditorium complex. Laptop computers became the norm for presentations and a wireless network kept laptop owners well connected. Most lunches and dinners were held on the lawn outside Berkner Hall. The weather was very cooperative and it was an extremely pleasant setting. This picture shows most of the participants from a view on the roof of Berkner Hall. Technical Coordination and Integration issues started the reports on Monday and became a...

  8. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast Tracker processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...
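    The associative-memory idea at the heart of FTK can be pictured as matching coarse hit patterns ("roads") against a pre-computed pattern bank. The Python sketch below is only an illustration of that concept, not ATLAS code: the layer count, granularity and pattern bank are invented, and the real hardware performs all comparisons in parallel rather than in a loop.

        # Illustrative software emulation of associative-memory pattern matching
        # (hypothetical pattern bank and hit values; not FTK firmware or software).

        N_LAYERS = 4  # assumed number of silicon layers used for matching

        # Pattern bank: road id -> one coarse hit address ("super-strip") per layer
        PATTERN_BANK = {
            0: (12, 40, 77, 103),
            1: (12, 41, 78, 104),
            2: (55, 90, 13, 22),
        }

        def coarsen(hit, granularity=8):
            """Reduce a fine-granularity hit position to a coarse super-strip address."""
            return hit // granularity

        def match_roads(hits_per_layer):
            """Return the ids of all roads whose coarse addresses appear in every layer."""
            coarse = [{coarsen(h) for h in layer_hits} for layer_hits in hits_per_layer]
            return [road_id for road_id, road in PATTERN_BANK.items()
                    if all(road[layer] in coarse[layer] for layer in range(N_LAYERS))]

        # Toy event: raw hit positions in each of the four layers
        event_hits = [[96, 100, 441], [321, 328], [616, 630], [824, 830]]
        print(match_roads(event_hits))  # -> [0]: only road 0 matches this event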

  9. Automated, Foot-Bone Registration Using Subdivision-Embedded Atlases for Spatial Mapping of Bone Mineral Density

    OpenAIRE

    Liu, Lu; Commean, Paul K.; Hildebolt, Charles; Sinacore, Dave; Prior, Fred; Carson, James P.; Kakadiaris, Ioannis,; Ju, Tao

    2012-01-01

    We present an atlas-based registration method for bones segmented from quantitative computed tomography (QCT) scans, with the goal of mapping their interior bone mineral densities (BMDs) volumetrically. We introduce a new type of deformable atlas, called subdivision-embedded atlas, which consists of a control grid represented as a tetrahedral subdivision mesh and a template bone surface embedded within the grid. Compared to a typical lattice-based deformation grid, the subdivision control gri...

  10. ATLAS data sonification : a new interface for musical expression

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This poster tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed. It is a partnership between the ATLAS collaboration and the MIT multimedia lab.
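    As a hedged illustration of the kind of mapping mentioned above, the short Python sketch below converts an event-level quantity into a musical pitch. The quantity, its range and the choice of scale are assumptions made for this example; they are not the mapping actually used by the project.

        # Illustrative sketch (not the ATLAS sonification code): map a hypothetical
        # event quantity, here the summed transverse energy, onto a note of a scale.

        C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers of one octave

        def energy_to_note(sum_et, et_min=0.0, et_max=2000.0):
            """Map a value in [et_min, et_max] (GeV, assumed range) to a note of the scale."""
            frac = min(max((sum_et - et_min) / (et_max - et_min), 0.0), 1.0)
            return C_MAJOR[int(round(frac * (len(C_MAJOR) - 1)))]

        def midi_to_frequency(note):
            """Standard equal-temperament conversion: A4 (MIDI 69) = 440 Hz."""
            return 440.0 * 2 ** ((note - 69) / 12)

        for sum_et in (150.0, 800.0, 1900.0):  # hypothetical events
            note = energy_to_note(sum_et)
            print(f"sum ET = {sum_et:6.1f} GeV -> MIDI note {note} ({midi_to_frequency(note):.1f} Hz)")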

  11. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  12. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  14. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office (Figure 6: Transfers from all sites in the last 90 days.) For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, hopefully only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office (Figure 2: Number of events per month for 2012.) Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites. (Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months.) The tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  17. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, which then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  18. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    Science.gov (United States)

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information. PMID:26363845
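    The two ingredients of the MLF recipe described above, a low-dimensional projection for selecting locally appropriate examples and boosted learners that map a weak initial segmentation towards the multi-atlas result, can be sketched with standard tools. The Python sketch below uses scikit-learn with random toy data as a stand-in for image features; the shapes, feature definitions and parameter values are assumptions, not the authors' configuration.

        # Toy sketch of the two MLF ingredients with scikit-learn; data are random
        # stand-ins for image patches, not real MR images.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)

        # Per-voxel feature vectors (e.g. intensity patch + weak label) and the
        # "gold" multi-atlas labels we want to replicate.
        X_train = rng.normal(size=(5000, 20))
        y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

        # (1) Low-dimensional representation of whole images, used to select
        # locally appropriate example (atlas) images for a new target.
        image_descriptors = rng.normal(size=(100, 400))   # one descriptor per atlas image
        pca = PCA(n_components=10).fit(image_descriptors)
        target_descriptor = rng.normal(size=(1, 400))
        distances = np.linalg.norm(
            pca.transform(image_descriptors) - pca.transform(target_descriptor), axis=1)
        selected_atlases = np.argsort(distances)[:15]      # nearest atlases in the subspace
        print("selected atlas images:", selected_atlases)

        # (2) Boosted learner mapping weak-segmentation features to the multi-atlas label.
        learner = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)
        X_target = rng.normal(size=(1000, 20))              # voxels of the new target image
        refined_labels = learner.predict(X_target)
        print("refined segmentation label counts:", np.bincount(refined_labels))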

  19. An ATLAS Virtual Visit connects physicists at the Town Square of Cracow and physicists of the LHC Experiment in the ATLAS control room; special participation of CERN's General Director, Rolf Heuer and the Director for Research and Scientific Computing, Sergio Bertolucci.

    CERN Multimedia

    2012-01-01

    The 12th Festival of Science "Theory-knowledge-experience..." will be located on the traditional Main Square, which is visited by thousands of citizens and tourists. The Institute of Nuclear Physics, as usual, participates in this annual event. Our visitors will learn the secrets of the CERN experiments at the Large Hadron Collider - ATLAS, LHCb, ALICE and CMS - and find out more about the Higgs particle, antimatter and the quark-gluon plasma, guided by our scientists and PhD students. One of the attractions will be an ATLAS Control Room Virtual Visit. Visitors will have an opportunity to see how ATLAS is controlled and operated to collect its exciting data, and to ask questions of the scientists and engineers involved in the LHC programme at CERN. The Institute of Nuclear Physics has also prepared several interactive demonstrations of Atomic Force Microscopy, Magnetic Resonance, Hadron Therapy and Crystal Physics.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now also installed and deployed at CERN, in addition to the GlideInWMS factory located in the US. There is a new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  1. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  2. Grid site testing for ATLAS with HammerCloud

    International Nuclear Information System (INIS)

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VOs) and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS Monte Carlo production system, the XRootD federation (FAX) and new site stress-test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.
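    The core loop of automated site testing can be pictured as: periodically submit short validation jobs to every site, keep a history of the results, and flag sites whose failure rate exceeds a threshold for exclusion. The Python sketch below illustrates only this idea; the submission function, site names and thresholds are hypothetical placeholders rather than the HammerCloud implementation.

        # Illustrative site-testing loop (hypothetical sites and thresholds;
        # job submission is faked with a random outcome).
        import random
        import time

        SITES = ["SITE_A", "SITE_B", "SITE_C"]   # hypothetical site names
        FAILURE_THRESHOLD = 0.5                   # exclude sites failing >50% of tests

        def submit_test_job(site):
            """Placeholder for submitting a short validation job; returns success/failure."""
            return random.random() > 0.2          # pretend 80% of test jobs succeed

        def run_test_cycle(history, n_jobs=10):
            for site in SITES:
                results = [submit_test_job(site) for _ in range(n_jobs)]
                history.setdefault(site, []).extend(results)

        def sites_to_exclude(history):
            excluded = []
            for site, results in history.items():
                failure_rate = 1.0 - sum(results) / len(results)
                if failure_rate > FAILURE_THRESHOLD:
                    excluded.append(site)
            return excluded

        history = {}
        for cycle in range(3):                    # a few on-demand test cycles
            run_test_cycle(history)
            print("cycle", cycle, "-> exclude:", sites_to_exclude(history))
            time.sleep(0.1)                       # the real system runs periodically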

  3. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly introduced Production Advancement Reviews, or the increasingly requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-time event and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. Another recognized by-product is the increasing cross-talk between the systems, a very important ingredient for letting all the systems profit from the large collective knowledge available in ATLAS. In the last two months, the first two PARs were organized for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  4. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  5. ATLAS Offline Data Quality Monitoring

    CERN Document Server

    Adelman, J; Boelaert, N; D'Onofrio, M; Frost, J A; Guyot, C; Hauschild, M; Hoecker, A; Leney, K J C; Lytken, E; Martinez-Perez, M; Masik, J; Nairz, A M; Onyisi, P U E; Roe, S; Schatzel, S; Schaetzel, S; Wilson, M G

    2010-01-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked for irregularities which would render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and in extreme cases signal a data-taking abort.
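    The automatic comparison with a reference described above can be illustrated with a minimal sketch: compute a per-bin chi-square between the monitored histogram and the reference and translate it into a status flag. The thresholds, bin contents and flag names below are invented for the example and do not reproduce the ATLAS data quality framework.

        # Minimal histogram-comparison sketch (hypothetical thresholds and bin contents).

        def chi2_per_dof(observed, reference):
            """Chi-square per degree of freedom between two equally binned histograms."""
            chi2, ndof = 0.0, 0
            for obs, ref in zip(observed, reference):
                if ref > 0:
                    chi2 += (obs - ref) ** 2 / ref
                    ndof += 1
            return chi2 / max(ndof, 1)

        def status_flag(observed, reference, warn=2.0, error=5.0):
            """Map the comparison to the kind of flag a shifter would see."""
            value = chi2_per_dof(observed, reference)
            if value < warn:
                return "GREEN", value
            if value < error:
                return "YELLOW", value
            return "RED", value

        reference_hist = [100, 250, 400, 250, 100]   # hypothetical reference run
        monitored_hist = [ 95, 260, 390, 255, 180]   # hypothetical current run (last bin off)
        print(status_flag(monitored_hist, reference_hist))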

  6. Experience running an ATLAS distributed Tier-2 and Tier-3 at IFIC-Valencia

    International Nuclear Information System (INIS)

    The ATLAS computing model describes a hierarchical distributed virtual computing facility within which are defined Tier-1 and Tier-2 computing centres having certain specific MOU agreed roles and capacities to be used for the benefit and at the direction of ATLAS as a whole. In this model the primary functions of the Tier-1 are to host and provide long term storage for, access to and re-reconstruction of a subset of the ATLAS RAW data (20% in the case of the Tier-1), provide access to ESD, AOD and TAG data sets and support the analysis of these data sets. The primary functions of the Tier-2s are simulation (they provide the bulk of simulation for ATLAS), calibration, chaotic analysis for a subset of analysis groups and hosting of AOD, TAG and some physics group samples. Tier-3 sites are institution-level non-ATLAS funded or controlled centres/clusters which wish to participate in ATLAS computing, presumably most frequently in support of the particular interests of local physicists (physicists at the local Tier-3 decide how these resources are used). These are clusters of computers which can vary widely in size. It should be noted that substantial institutional funding to originate such clusters is potentially available, and that they could make a real contribution to the overall ATLAS physics output. As such, there is considerable value in providing some level of technical support to these sites. In this talk the experience gained in running, maintaining, supporting and managing a Tier-2 centre will be presented. Finally, a Tier-3 prototype at IFIC-Valencia is going to be discussed, in order to meet ATLAS data-taking requirements. (Author)

  7. EnviroAtlas - Metrics for Austin, TX

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this...

  8. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Goldfarb, S.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project. A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please e...

  9. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications; however, concerns have been raised about their scalability under a data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
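    The schema-less, key-value access pattern evaluated in this work (HBase-style row key plus column qualifiers) can be sketched without committing to any particular client library. The Python example below uses a plain dictionary as a stand-in store; the dataset names, column names and sizes are invented for illustration.

        # Key-value access pattern sketch: row key = dataset name, value = columns.
        # A plain dict stands in for the store; no specific client API is implied.
        from collections import defaultdict

        dataset_store = defaultdict(dict)

        def put(dataset, column, value):
            dataset_store[dataset][column] = value

        def get_row(dataset):
            return dataset_store.get(dataset, {})

        # Hypothetical dataset metadata, written without any fixed relational schema.
        put("data12_example.periodA.RAW", "meta:events", 123456789)
        put("data12_example.periodA.RAW", "meta:bytes", 42_000_000_000_000)
        put("data12_example.periodA.RAW", "replicas:SITE_1", "complete")
        put("data12_example.periodA.RAW", "replicas:SITE_2", "copying")

        # Aggregation-style scan over all rows, the workload that is hard on an RDBMS.
        total_bytes = sum(row.get("meta:bytes", 0) for row in dataset_store.values())
        print(get_row("data12_example.periodA.RAW"))
        print("total bytes across datasets:", total_bytes)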

  10. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    J. Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the l...

  11. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  12. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  14. The ATLAS Muon Trigger

    CERN Document Server

    Ventura, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment at CERN's Large Hadron Collider (LHC) deploys a three-level processing scheme for the trigger system. The Level-1 muon trigger system gets its input from fast muon trigger detectors. Fast sector logic boards select muon candidates, which are passed via an interface board to the central trigger processor and then to the High Level Trigger (HLT). The muon HLT is purely software based and encompasses a Level-2 trigger followed by an event filter for a staged trigger approach. It has access to the data of the precision muon detectors and other detector elements to refine the muon hypothesis. The ATLAS experiment has taken data with high efficiency continuously over entire running periods from 2010 to 2012, for which sophisticated triggers were mandatory to guard the highest physics output while effectively reducing the event rate. The ATLAS muon trigger has successfully adapted to this changing environment. The selection strategy has been optimized for the various physics analyses involving ...

  15. The ATLAS tau trigger

    CERN Document Server

    Casado, MP; Benslama, K; Bosman, M; Brenner, R; Czyczula, Z; Dam, M; Demers, S; Farrington, S; Igonkina, O; Kalinowski, A; Kanaya, N; Osuna, C; Pérez, E; Ptacek, E; Reinsch, A; Saavedra, A; Sfyrla, A; Shamin, M; Sopczak, A; Strom, D; Torrence, E; Tsuno, S; Vorwerk, V; Watson, A; Xella, S

    2008-01-01

    The implementation of a trigger for hadronically decaying tau leptons at the Large Hadron Collider (LHC) is challenging due to the high background rate; on the other hand, it tremendously increases the discovery potential of ATLAS in searches for Standard Model (SM) or Supersymmetric (SUSY) Higgs or other more exotic final states. In this paper we describe the ATLAS tau trigger system, focusing on the early data taking period, and present results from studies based on GEANT 4 simulated events, including trigger rates and the acceptance of tau leptons from SM processes. In order to cope with the rate and optimize the efficiency of important physics channels, the results of the current simulation studies indicate that ATLAS tau triggers should include either relatively high transverse momentum single tau signatures, or low transverse momentum tau signatures in combination with other signatures, such as missing transverse energy, leptons, or jets.

  16. The ATLAS metadata interface

    International Nuclear Information System (INIS)

    AMI was chosen as the ATLAS dataset selection interface in July 2006. It is the main interface for searching for ATLAS data using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schemas. The main features of the web interface will be described, in particular the powerful graphic query builder. The use of XML/XSLT technology ensures that all commands can be used either on the web or from a command line interface via a web service. We also describe the overall architecture of ATLAS metadata and the different actors and granularity involved, and the place of AMI within this architecture. We discuss the problems involved in the correlation of metadata of differing granularity, and propose a solution for information mediation.
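    The parallel search over many catalogues with differing schemas can be illustrated with a small sketch: each catalogue gets its own query function that maps its local field names onto a common result format, and the queries run concurrently before being merged. The catalogues, field names and datasets below are invented for illustration and are not the AMI schema.

        # Toy parallel catalogue search with schema mapping (invented catalogues).
        from concurrent.futures import ThreadPoolExecutor

        CATALOGUE_A = [{"logicalDatasetName": "mc_higgs_ggH", "nFiles": 120}]
        CATALOGUE_B = [{"dataset": "mc_higgs_ggH", "files": 120, "project": "mc12"}]

        def search_a(pattern):
            return [{"dataset": d["logicalDatasetName"], "files": d["nFiles"]}
                    for d in CATALOGUE_A if pattern in d["logicalDatasetName"]]

        def search_b(pattern):
            return [{"dataset": d["dataset"], "files": d["files"]}
                    for d in CATALOGUE_B if pattern in d["dataset"]]

        def search_all(pattern):
            """Run every catalogue-specific search concurrently and merge the results."""
            with ThreadPoolExecutor() as pool:
                futures = [pool.submit(fn, pattern) for fn in (search_a, search_b)]
                results = []
                for f in futures:
                    results.extend(f.result())
            return results

        print(search_all("higgs"))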

  17. ATLAS rewards industry

    CERN Multimedia

    2006-01-01

    Showing excellence in mechanics, electronics and cryogenics, three industries are honoured for their contributions to the ATLAS experiment. Representatives of the three award-winning companies after the ceremony. For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Close interaction with CERN was a key factor in the selection of each rewarded company, in addition to the high-quality products they delivered to the experiment. Alu Menziken Industrie AG, of Switzerland, was honoured for the production of 380,000 aluminium tubes for the Monitored Drift Tube Chambers (MDT). As Giora Mikenberg, the Muon System Project Leader stressed, the aluminium tubes were delivered on time with an extraordinary quality and precision. Between October 2000 and Jan...

  18. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data readout from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During the data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  19. The ATLAS tau trigger

    International Nuclear Information System (INIS)

    The implementation of a trigger for hadronically decaying tau leptons at the Large Hadron Collider (LHC) is challenging due to the high background rate; on the other hand, it tremendously increases the discovery potential of ATLAS in searches for Standard Model (SM) or Supersymmetric (SUSY) Higgs or other more exotic final states. In this paper we describe the ATLAS tau trigger system, focusing on the early data taking period, and present results from studies based on GEANT 4 simulated events, including trigger rates and the acceptance of tau leptons from SM processes. In order to cope with the rate and optimize the efficiency of important physics channels, the results of the current simulation studies indicate that ATLAS tau triggers should include either relatively high transverse momentum single tau signatures, or low transverse momentum tau signatures in combination with other signatures, such as missing transverse energy, leptons, or jets.

  20. Calorimetry triggering in ATLAS

    International Nuclear Information System (INIS)

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid-argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  1. Two ATLAS suppliers honoured

    CERN Document Server

    2007-01-01

    The ATLAS experiment has recognised the outstanding contribution of two firms to the pixel detector. Recipients of the supplier award with Peter Jenni, ATLAS spokesperson, and Maximilian Metzger, CERN Secretary-General. At a ceremony held at CERN on 28 November, the ATLAS collaboration presented awards to two of its suppliers that had produced sensor wafers for the pixel detector. The CiS Institut für Mikrosensorik of Erfurt in Germany has supplied 655 sensor wafers containing a total of 1652 sensor tiles and the firm ON Semiconductor has supplied 515 sensor wafers (1177 sensor tiles) from its foundry at Roznov in the Czech Republic. Both firms have successfully met the very demanding requirements. ATLAS’s huge pixel detector is very complicated, requiring expertise in highly specialised integrated microelectronics and precision mechanics. Pixel detector project leader Kevin Einsweiler admits that when the project was first propo...

  2. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies. PMID:9148878

  3. ATLAS/CMS Upgrades

    CERN Document Server

    Horii, Yasuyuki; The ATLAS collaboration

    2016-01-01

    Precision studies of the Standard Model (SM) and searches for physics beyond the SM are ongoing at the ATLAS and CMS experiments at the Large Hadron Collider (LHC). A luminosity upgrade of the LHC is planned, which presents a significant challenge for the experiments. In this report, the plans of the ATLAS and CMS upgrades are introduced. Physics prospects for selected topics, including Higgs coupling measurements, Bs,d -> mumu decays, and top quark decays through flavor-changing neutral currents, are also shown.

  4. The Herschel ATLAS

    CERN Document Server

    Eales, S; Clements, D; Cooray, A R; De Zotti, G; Dye, S; Ivison, R; Jarvis, M; Lagache, G; Maddox, S; Negrello, M; Serjeant, S; Thompson, M A; Van Kampen, E; Amblard, A; Andreani, P; Baes, M; Beelen, A; Bendo, G J; Benford, D; Bertoldi, F; Bock, J; Bonfield, D; Boselli, A; Bridge, C; Buat, V; Burgarella, D; Carlberg, R; Cava, A; Chanial, P; Charlot, S; Christopher, N; Coles, P; Cortese, L; Dariush, A; Da Cunha, E; Dalton, G; Danese, L; Dannerbauer, H; Driver, S; Dunlop, J; Fan, L; Farrah, D; Frayer, D; Frenk, C; Geach, J; Gardner, J; Gomez, H; Gonzalez-Nuevo, J; Gonzalez-Solares, E; Griffin, M; Hardcastle, M; Hatziminaoglou, E; Herranz, D; Hughes, D; Ibar, E; Jeong, Woong-Seob; Lacey, C; Lapi, A; Lee, M; Leeuw, L; Liske, J; Lopez-Caniego, M; Müller, T; Nandra, K; Panuzzo, P; Papageorgiou, A; Patanchon, G; Peacock, J; Pearson, C; Phillipps, S; Pohlen, M; Popescu, C; Rawlings, S; Rigby, E; Rigopoulou, M; Rodighiero, G; Sansom, A; Schulz, B; Scott, D; Smith, D J B; Sibthorpe, B; Smail, I; Stevens, J; Sutherland, W; Takeuchi, T; Tedds, J; Temi, P; Tuffs, R; Trichas, M; Vaccari, M; Valtchanov, I; Van der Werf, P; Verma, A; Vieria, J; Vlahakis, C; White, Glenn J

    2009-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 510 square degrees of the extragalactic sky, four times larger than all the other Herschel surveys combined, in five far-infrared and submillimetre bands. We describe the survey, the complementary multi-wavelength datasets that will be combined with the Herschel data, and the six major science programmes we are undertaking. Using new models based on a previous submillimetre survey of galaxies, we present predictions of the properties of the ATLAS sources in other wavebands.

  5. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    Karsten Köneke; on behalf of the ATLAS Collaboration

    2012-11-01

    The ATLAS experiment at the Large Hadron Collider has been recording data from proton–proton collisions at a centre-of-mass energy of 7 TeV since the spring of 2010. The integrated luminosity has grown nearly exponentially since then and continues to rise fast. The ATLAS Collaboration has set up a framework to automatically process the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2–3 days after data taking). Hints of potentially interesting physics signals obtained this way are followed up by physics groups.

  6. ATLAS Jet Energy Scale

    CERN Document Server

    Schouten, D; Vetterli, M

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and complicated, final state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance at experiments at the Large Hadron Collider at CERN, yet it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy corrections to within approximately 4% in the central region of the calorimeter.

  7. ATLAS forward physics program

    CERN Document Server

    HELLER, M; The ATLAS collaboration

    2010-01-01

    The variety of forward detectors installed in the vicinity of the ATLAS experiment allows a wide range of forward physics topics to be studied. They provide good information about rapidity gaps, and the installation of very forward detectors (ALFA and AFP) will allow the leading proton(s) remaining from the different processes studied to be tagged. Most of the studies have to be done at low luminosity to avoid pile-up, but the AFP project offers a really exciting future for the ATLAS forward physics program. We also present how these forward detectors can be used to measure the relative and absolute luminosity.

  8. Improving ATLAS reprocessing software

    CERN Document Server

    Novak, Tadej

    2014-01-01

    For my CERN Summer Student programme I have been working with the ATLAS reprocessing group. Data taken by the ATLAS experiment are not only processed promptly after being taken, but are also reprocessed multiple times afterwards. This allows new alignments and detector calibrations to be applied and improved or faster algorithms to be used. Reprocessing is usually done in campaigns for different periods of data or for different interest groups. The idea of my project was to simplify the definition of tasks and the monitoring of their progress. I created a LIST configuration file generator script in Python and a monitoring webpage for tracking current reprocessing tasks.
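    In the spirit of the configuration generator described above, the Python sketch below writes one task-configuration file per data period from a simple list. The period names, configuration fields and file layout are hypothetical; the actual script and its LIST format are not reproduced here.

        # Minimal configuration-generator sketch (hypothetical periods, fields and layout).
        from pathlib import Path

        PERIODS = ["data12_example_periodA", "data12_example_periodB"]   # placeholder periods
        RELEASE = "AtlasProduction-XX.Y.Z"                                # placeholder release

        def write_task_config(period, outdir="repro_configs"):
            Path(outdir).mkdir(exist_ok=True)
            config = "\n".join([
                f"input_dataset = {period}",
                f"software_release = {RELEASE}",
                "task_type = reprocessing",
                "priority = 500",
            ])
            path = Path(outdir) / f"{period}.cfg"
            path.write_text(config + "\n")
            return path

        for period in PERIODS:
            print("wrote", write_task_config(period))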

  9. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    CAMERA ON TOROID The ATLAS barrel toroid system consists of eight coils, each of axial length 25.3 m, assembled radially and symmetrically around the beam axis. The coils are of a flat racetrack type with two double-pancake windings made of 20.5 kA aluminium-stabilized niobium-titanium superconductor. The video shows the slow lowering of the toroid down into the ATLAS cavern. It is a very demanding task. The camera is placed on top of the toroid.

  10. HIGGS RESULTS FROM ATLAS

    CERN Document Server

    Benhar Noccioli, Eleonora; The ATLAS collaboration

    2016-01-01

    This document presents the most recent ATLAS results on the searches for additional heavy scalars, which could confirm the existence of an extended Higgs sector. The new results include searches for charged as well as for neutral heavy Higgs bosons, decaying to a variety of final states. All analyses are performed using the 2015 LHC pp collision data at 13 TeV centre-of-mass energy, corresponding to an integrated luminosity of 3.2 fb−1 recorded with the ATLAS detector.

  11. Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) control system

    International Nuclear Information System (INIS)

    Given that the Argonne Tandem Linear Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present, and future of the ATLAS Control System, and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the sixties. With the addition of the Booster section in the late seventies came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and two CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades are also in the planning stages that will continue to evolve the control system. (authors)

  12. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating at the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  13. 10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

  14. 18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

    CERN Multimedia

    Samuel Morier-Genoud

    2012-01-01

    18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

  15. Data Processing at the ATLAS Experiment (Spracovanie dát na experimente ATLAS)

    Czech Academy of Sciences Publication Activity Database

    Marčišovský, Michal; Kubeš, T.; Chudoba, Jiří

    2008-01-01

    Vol. 58, No. 6 (2008), pp. 354-359. ISSN 0009-0700. R&D Projects: GA MŠk LA08032. Institutional research plan: CEZ:AV0Z10100502. Keywords: Atlas * computing * DCS * Grid. Subject RIV: BF - Elementary Particles and High Energy Physics

  16. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN’s EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic properties of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system.   Former compressor-based cooling system of the ATLAS inner detectors. The system is currently being replaced by the innovative thermosiphon. (Photo courtesy of Olivier Crespo-Lopez). Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. “The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors,” says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....

  17. ATLAS solenoid operates underground

    CERN Multimedia

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems-a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  18. ATLAS starts moving in

    CERN Multimedia

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  19. Prime wires for ATLAS

    CERN Multimedia

    2003-01-01

    In an award ceremony on 3 September, ATLAS honoured the French company Axon Cable for its special coaxial cables, which were purpose-built for the Liquid Argon calorimeter modules. Working for CERN since the 1970s, Axon' Cable received the ATLAS supplier award last week for its contribution to the liquid argon calorimeter cables of ATLAS (LAL/Orsay, France and University of Victoria, Canada), started in 1996. Its two sets of minicoaxial cables, called harnesses "A" and "B", are designed to function in the harsh conditions in the liquid argon (at 90 Kelvin or -183°C) and under extreme radiation (up to several Mrads). The cables are mainly used for the readout of the calorimeters, and are connected to the outside world by 114 signal feedthroughs with 1920 channels each. The signal from the detectors is transmitted directly without any amplification, which imposes tight restrictions on the impedance and on the signal propagation time of the cables. Peter Jenni, ATLAS spokesperson, gives the award for best s...

  20. An Icelandic wind atlas

    Science.gov (United States)

    Nawri, Nikolai; Nína Petersen, Gudrun; Bjornsson, Halldór; Arason, Þórður; Jónasson, Kristján

    2013-04-01

    While Iceland has ample wind, its use for energy production has been limited. Electricity in Iceland is generated from renewable hydro and geothermal sources, and adding wind energy has not been considered practical or even necessary. However, adding wind to the energy mix is becoming a more viable option as opportunities for new hydro or geothermal power installations become limited. In order to obtain an estimate of the wind energy potential of Iceland, a wind atlas has been developed as part of the Nordic project "Improved Forecast of Wind, Waves and Icing" (IceWind). The atlas is based on mesoscale model runs produced with the Weather Research and Forecasting (WRF) Model and high-resolution regional analyses obtained through the Wind Atlas Analysis and Application Program (WAsP). The wind atlas shows that the wind energy potential is considerable. The regions with the strongest average wind are nevertheless impractical for wind farms, due to distance from road infrastructure and the power grid as well as the harsh winter climate. However, even in easily accessible regions the wind energy potential in Iceland, as measured by annual average power density, is among the highest in Western Europe. There is a strong seasonal cycle, with wintertime power densities throughout the island being at least a factor of two higher than during summer. Calculations show that a modest wind farm of ten medium-size turbines would produce more energy throughout the year than a small hydro power plant, making wind energy a viable additional option.

  1. HWW in ATLAS

    CERN Document Server

    Rados, Pere; The ATLAS collaboration

    2016-01-01

    The H-->WW channel plays an important role in Higgs boson property measurements, searches for rare decay modes, and searches for possible extended Higgs sectors. In this talk the latest H-->WW results from ATLAS will be briefly summarised.

  2. Atlas of NATO.

    Science.gov (United States)

    Young, Harry F.

    This atlas provides basic information about the North Atlantic Treaty Organization (NATO). Formed in response to growing concern for the security of Western Europe after World War II, NATO is a vehicle for Western efforts to reduce East-West tensions and the level of armaments. NATO promotes political and economic collaboration as well as military…

  3. Top physics in ATLAS

    CERN Document Server

    Naranjo, Roger

    2016-01-01

    These proceedings summarize the latest measurements on top production, top properties and searches using the ATLAS detector at the LHC. The measurements are performed on $pp$ collision data with a center of mass energy $\sqrt{s} = 7, 8$ and $13$ TeV.

  4. Exotic searches at ATLAS

    CERN Document Server

    Turra, Ruggero; The ATLAS collaboration

    2016-01-01

    The ATLAS detector has collected 3.2 fb^-1 of proton-proton collisions at 13 TeV centre of mass energy during the 2015 LHC run. A selected review of recent results is presented, in the context of direct searches for physics beyond the Standard Model other than SUSY and BSM Higgs searches.

  5. SUPERSYMMETRY SEARCHES IN ATLAS

    CERN Document Server

    Romero Adam, Elena; The ATLAS collaboration

    2016-01-01

    Weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This contribution summarises recent ATLAS results for searches for supersymmetric (SUSY) particles with the LHC Run 1 data at √s = 8 TeV. A sensitivity study for the √s = 13 TeV data is also briefly presented.

  6. ATLAS Experiment Brochure

    CERN Multimedia

    Goldfarb, Steven

    2016-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  7. ATLAS "Splash event" 2008

    CERN Multimedia

    ATLAS, Experiment

    2014-01-01

    "Splash events": As the LHC was being tuned up on 10 September 2008, beam was initially directed at beam collimators just outside the detector, so that a splash of particles would fill much of the detector allowing ATLAS experimenters to prepare the detector for actual running.

  8. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    Different phases of realisation at Point 1: the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across the main CERN entrance, in the commune of Meyrin. People there are busy finishing the different infrastructures for ATLAS. Real underground video. When passing through the walls, the ongoing work can be heard and seen. The film has its original working sound.

  9. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    The ATLAS [1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ) [2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information

  10. High-Performance Scalable Information Service for the ATLAS Experiment

    Science.gov (United States)

    Kolos, S.; Boutsioukis, G.; Hauser, R.

    2012-12-01

    The ATLAS [1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ) [2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
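
    The request/notification pattern described in this record can be illustrated with a minimal publish/subscribe sketch. The class and method names below are hypothetical stand-ins, not the actual ATLAS TDAQ Information Service API; the sketch only shows the idea of named information items with on-request reads and subscriber callbacks.

      # Minimal sketch of an IS-like publish/subscribe service (hypothetical API,
      # not the real ATLAS TDAQ Information Service interface).
      from collections import defaultdict
      from typing import Any, Callable, Dict, List

      class InfoService:
          """Stores named information items and notifies subscribers on update."""

          def __init__(self) -> None:
              self._items: Dict[str, Any] = {}
              self._subscribers: Dict[str, List[Callable[[str, Any], None]]] = defaultdict(list)

          def publish(self, name: str, value: Any) -> None:
              # Update the item and push a notification to every subscriber.
              self._items[name] = value
              for callback in self._subscribers[name]:
                  callback(name, value)

          def read(self, name: str) -> Any:
              # On-request access to any information item.
              return self._items[name]

          def subscribe(self, name: str, callback: Callable[[str, Any], None]) -> None:
              self._subscribers[name].append(callback)

      # Usage: an HLT node publishes a summary item, a monitor subscribes to updates.
      service = InfoService()
      service.subscribe("HLT.node42.rejection", lambda n, v: print(f"update: {n} = {v}"))
      service.publish("HLT.node42.rejection", {"accepted": 120, "rejected": 99880})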

  11. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  12. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    International Nuclear Information System (INIS)

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve

  13. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
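
    As a rough illustration of the ranking criterion described above, the sketch below estimates the conditional entropy of the target intensities given a propagated atlas labelling from a joint histogram, and ranks atlases by that value (lower is better). It assumes intensities binned into a fixed number of bins and non-negative integer label maps; it is a simplified stand-in, not the authors' implementation.

      import numpy as np

      def conditional_entropy(target: np.ndarray, labels: np.ndarray, n_bins: int = 64) -> float:
          """Estimate H(target | labels) in bits from a joint histogram.

          A lower value means the propagated atlas labelling explains the target
          intensities better, so the corresponding atlas is ranked higher.
          """
          edges = np.histogram_bin_edges(target, bins=n_bins)
          t = np.digitize(target.ravel(), edges[1:-1])          # intensity bin per voxel
          lab = labels.ravel().astype(int)                      # label per voxel
          joint = np.zeros((n_bins, lab.max() + 1))
          np.add.at(joint, (t, lab), 1)                         # joint histogram
          p_joint = joint / joint.sum()
          p_label = p_joint.sum(axis=0, keepdims=True)          # marginal over labels
          with np.errstate(divide="ignore", invalid="ignore"):
              h = -np.nansum(p_joint * np.log2(p_joint / p_label))
          return float(h)

      def rank_atlases(target, propagated_labelings):
          """Return atlas indices sorted from best (lowest entropy) to worst."""
          scores = [conditional_entropy(target, lab) for lab in propagated_labelings]
          return list(np.argsort(scores))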

  14. Taking ATLAS to new heights

    CERN Multimedia

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  15. Statistical atlas based extrapolation of CT data

    Science.gov (United States)

    Chintalapani, Gouthami; Murphy, Ryan; Armiger, Robert S.; Lepisto, Jyri; Otake, Yoshito; Sugano, Nobuhiko; Taylor, Russell H.; Armand, Mehran

    2010-02-01

    We present a framework to estimate the missing anatomical details from a partial CT scan with the help of statistical shape models. The motivating application is periacetabular osteotomy (PAO), a technique for treating developmental hip dysplasia, an abnormal condition of the hip socket that, if untreated, may lead to osteoarthritis. The common goals of PAO are to reduce pain, joint subluxation and improve contact pressure distribution by increasing the coverage of the femoral head by the hip socket. While current diagnosis and planning is based on radiological measurements, because of significant structural variations in dysplastic hips, a computer-assisted geometrical and biomechanical planning based on CT data is desirable to help the surgeon achieve optimal joint realignments. Most of the patients undergoing PAO are young females, hence it is usually desirable to minimize the radiation dose by scanning only the joint portion of the hip anatomy. These partial scans, however, do not provide enough information for biomechanical analysis due to missing iliac region. A statistical shape model of full pelvis anatomy is constructed from a database of CT scans. The partial volume is first aligned with the statistical atlas using an iterative affine registration, followed by a deformable registration step and the missing information is inferred from the atlas. The atlas inferences are further enhanced by the use of X-ray images of the patient, which are very common in an osteotomy procedure. The proposed method is validated with a leave-one-out analysis method. Osteotomy cuts are simulated and the effect of atlas predicted models on the actual procedure is evaluated.
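
    The step of inferring the missing anatomy from the atlas can be sketched as fitting a PCA-based statistical shape model to the landmarks visible in the partial scan and reading off the model's prediction for the rest. The snippet below is a plain least-squares fit of shape coefficients to the observed coordinates; it illustrates the general idea only and is not the registration pipeline used in the paper.

      import numpy as np

      def complete_shape(mean_shape, modes, observed_idx, observed_points):
          """Fit PCA shape coefficients to observed landmarks and predict the full shape.

          mean_shape      : (3N,) mean landmark coordinates of the atlas
          modes           : (3N, K) principal modes of variation
          observed_idx    : indices into the coordinate vector that are visible
          observed_points : observed coordinate values at those indices
          """
          A = modes[observed_idx, :]                       # modes restricted to visible coordinates
          b = observed_points - mean_shape[observed_idx]
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares shape coefficients
          return mean_shape + modes @ coeffs               # full shape, including the missing part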

  16. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  17. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...
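
    The search topology described here, from the site's local storage outward to its region and then to globally distributed resources, amounts to a tiered lookup. The sketch below shows that control flow with a hypothetical redirector interface; it does not use the real XRootD client API.

      from typing import Iterable, Optional

      def locate_file(lfn: str, redirectors: Iterable) -> Optional[str]:
          """Query local, regional and global redirectors in order; return the first hit.

          Each element of `redirectors` is assumed to expose a lookup(lfn) method
          returning a storage URL or None (an illustrative interface, not XRootD's).
          """
          for redirector in redirectors:
              url = redirector.lookup(lfn)
              if url is not None:
                  return url       # closest copy wins: local before regional before global
          return None              # file not found anywhere in the federation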

  18. Spinal canal stenosis at the level of Atlas

    Directory of Open Access Journals (Sweden)

    Suchanda Bhattacharjee

    2011-01-01

    We report a rare case of high cervical stenosis at the level of the atlas in a patient who presented with progressively deteriorating quadriparesis and respiratory distress. A 10-year-old boy presented with the above symptoms of one-year duration, with a preceding history of trivial trauma prior to the onset of the symptoms. Cervical spine MRI revealed a significant stenosis at the level of the atlas from the posterior side, with a syrinx extending above and below. High-resolution computed tomography of the above level revealed an ill-defined osseous bar compressing the canal at the level of the C1 posterior arch, which appeared bifid in the midline. The patient was immediately taken up for surgery in view of his respiratory complaints. The child showed an excellent recovery after excision of the posterior arch of the atlas and removal of the compressing osseous structure.

  19. 29 March 2011 - Ninth President of Israel S. Peres welcomed by CERN Director-General R. Heuer who introduces Council President M. Spiro, Director for Accelerators and Technology S. Myers, Head of International Relations F. Pauss, Physics Department Head P. Bloch, Technology Department Head F. Bordry, Human Resources Department Head A.-S. Catherin, Beams Department Head P. Collier, Information Technology Department Head F. Hemmer, Adviser for Israel J. Ellis, Legal Counsel E. Gröniger-Voss, ATLAS Collaboration Spokesperson F. Gianotti, Former ATLAS Collaboration Spokesperson P. Jenni, Weizmann Institute G. Mikenberg, CERN VIP and Protocol Officer W. Korda.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    During his visit he toured the ATLAS underground experimental area with Giora Mikenberg of the ATLAS collaboration, Weizmann Institute of Sciences and Israeli industrial liaison office, Rolf Heuer, CERN’s director-general, and Fabiola Gianotti, ATLAS spokesperson. The president also visited the CERN computing centre and met Israeli scientists working at CERN.

  20. Software releases management for TDAQ system in ATLAS experiment

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Hauser, R; Soloviev, I

    2010-01-01

    ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. The TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing the framework and APIs for building the s/w pieces of the TDAQ system. It is currently composed of more than 200 s/w packages which are available to ATLAS users in the form of regular software releases. The s/w is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by s/w developers and s/w librarians in order to develop, release, deploy and maintain the TDAQ s/w for the long period of development, commissioning and runnin...

  1. ATLAS copies its first PetaByte out of CERN

    CERN Multimedia

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 Bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...
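
    For orientation, the quoted figures can be cross-checked with a little arithmetic: 780 MB/s shared among 10 Tier-1 sites averages to 78 MB/s per site at peak, while 8 PetaBytes per year corresponds to a sustained average of roughly 250 MB/s out of CERN, confirming that the 780 MB/s figure describes peak running rather than a year-round average. A quick check under those assumptions:

      # Back-of-the-envelope check of the quoted Tier-0 export figures.
      peak_rate_mb_s = 780                      # MB/s out of CERN at full trigger rate
      n_tier1_sites = 10
      seconds_per_year = 365 * 24 * 3600

      print(peak_rate_mb_s / n_tier1_sites, "MB/s per Tier-1 site at peak")     # 78.0
      yearly_pb = 8                             # PetaBytes per year quoted above
      avg_rate_mb_s = yearly_pb * 1e9 / seconds_per_year                        # 1 PB = 1e9 MB
      print(round(avg_rate_mb_s), "MB/s sustained average for 8 PB/year")       # ~254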

  2. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    CERN Document Server

    González de la Hoza, S; Ros, E; Sánchez, J; Amorós, G; Fassi, F; Fernández, A; Kaci, M; Lamas, A; Salt, J

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Insti...

  3. ATLAS EventIndex monitoring system using Kibana analytics and visualization platform

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Prokoshin, Fedor; Gallas, Elizabeth; Favareto, Andrea; Hrivnac, Julius; Sanchez, Javier; Fernandez Casani, Alvaro; Gonzalez de la Hoz, Santiago; Garcia Montoro, Carlos; Salt, Jose; Malon, David; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, at all processing stages. As it consists of different components that depend on other applications (such as distributed storage and different sources of information), we need to monitor the conditions of many heterogeneous subsystems to make sure everything is working correctly. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytics and visualization package, provided by the CERN IT Department. EventIndex monitoring is used both by the EventIndex team and by the ATLAS Distributed Computing shift crew.
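
    The collect-process-display flow described above boils down to shipping per-component status records to a central store that Kibana can then visualise. The sketch below posts one JSON status document over HTTP to a hypothetical collector endpoint; the URL and document fields are illustrative only, not the actual CERN IT monitoring interface.

      import json
      import time
      import urllib.request

      def report_status(component: str, healthy: bool, endpoint: str) -> None:
          """Send one monitoring document (JSON over HTTP) to a collector endpoint."""
          doc = {
              "component": component,     # e.g. a data-import or catalogue subsystem
              "healthy": healthy,
              "timestamp": time.time(),
          }
          request = urllib.request.Request(
              endpoint,
              data=json.dumps(doc).encode(),
              headers={"Content-Type": "application/json"},
          )
          urllib.request.urlopen(request)  # fire-and-forget; real code would handle errors

      # Example (requires a collector listening at this made-up address):
      # report_status("eventindex-producer", True, "http://monitoring.example.org/ingest")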

  4. Congenital bipartite atlas with hypodactyly in a dog: clinical, radiographic and CT findings.

    Science.gov (United States)

    Wrzosek, M; Płonek, M; Zeira, O; Bieżyński, J; Kinda, W; Guziński, M

    2014-07-01

    A three-year-old Border collie was diagnosed with a bipartite atlas and bilateral forelimb hypodactyly. The dog showed signs of acute, non-progressive neck pain, general stiffness and right thoracic limb non-weight-bearing lameness. Computed tomography imaging revealed a bipartite atlas with abaxial vertical bone proliferation, which was the cause of the clinical signs. In addition, bilateral hypodactyly of the second and fifth digits was incidentally found. This report suggests that hypodactyly may be associated with atlas malformations. PMID:24635705

  5. Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G.; Abat, E.; Abbott, B.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Acharya, Bobby Samir; Adams, D.L.; Addy, T.N.; Adorisio, C.; Adragna, P.; Adye, T.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; /SUNY, Albany /Alberta U. /Ankara U. /Annecy, LAPP /Argonne /Arizona U. /Texas U., Arlington /Athens U. /Natl. Tech. U., Athens /Baku, Inst. Phys. /Barcelona, IFAE /Belgrade U. /VINCA Inst. Nucl. Sci., Belgrade /Bergen U. /LBL, Berkeley /Humboldt U., Berlin /Bern U., LHEP /Birmingham U. /Bogazici U. /INFN, Bologna /Bologna U.

    2011-11-28

    The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of

  6. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  7. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...
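
    The 'single computing facility' view rests on a pull model: pilot jobs running at the distributed sites ask a central service for the next piece of work instead of having work pushed to them. The sketch below shows that pattern with hypothetical class and job names; it is not the PanDA API.

      import queue

      class CentralQueue:
          """Central job queue that pilots at any site can pull from (illustrative only)."""

          def __init__(self, jobs):
              self._jobs = queue.Queue()
              for job in jobs:
                  self._jobs.put(job)

          def get_job(self, site: str):
              # A real broker would match jobs to site capabilities; here: first come, first served.
              try:
                  return self._jobs.get_nowait()
              except queue.Empty:
                  return None

      def pilot(site: str, broker: CentralQueue) -> None:
          """A pilot repeatedly pulls and 'runs' jobs until the queue is empty."""
          while (job := broker.get_job(site)) is not None:
              print(f"{site}: running {job}")

      broker = CentralQueue(["simulate.0001", "simulate.0002", "reconstruct.0001"])
      pilot("SITE-A", broker)
      pilot("SITE-B", broker)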

  9. Brain templates and atlases.

    Science.gov (United States)

    Evans, Alan C; Janke, Andrew L; Collins, D Louis; Baillet, Sylvain

    2012-08-15

    The core concept within the field of brain mapping is the use of a standardized, or "stereotaxic", 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or "atlases" that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease. However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from "native" space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target "template" or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depend upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease
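
    The two components listed above, a template defining the stereotaxic space and a mapping function from native scanner coordinates into it, can be made concrete with the simplest possible mapping: a single 4x4 affine transform applied to homogeneous coordinates. The sketch below only illustrates component (ii) under that simplifying assumption; real pipelines use mappings with many more degrees of freedom.

      import numpy as np

      def native_to_stereotaxic(points_native: np.ndarray, affine: np.ndarray) -> np.ndarray:
          """Map Nx3 native-space coordinates into template space with a 4x4 affine.

          The affine would come from registering the subject image to the template;
          the registration itself is not shown here.
          """
          homogeneous = np.hstack([points_native, np.ones((len(points_native), 1))])
          return (homogeneous @ affine.T)[:, :3]

      # Example: a pure translation of +10 mm along x.
      affine = np.eye(4)
      affine[0, 3] = 10.0
      print(native_to_stereotaxic(np.array([[0.0, 0.0, 0.0]]), affine))   # [[10.  0.  0.]]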

  10. Radiologic atlas of pulmonary abnormalities in children

    International Nuclear Information System (INIS)

    This book is an atlas about thoracic abnormalities in infants and children. The authors include computed tomographic, digital subtraction angiographic, ultrasonographic, and a few magnetic resonance (MR) images. They recognize and discuss how changes in the medical treatment of premature infants and the management of infection and pediatric tumors have altered some of the appearances and considerations in these diseases. Oriented toward all aspects of pulmonary abnormalities, the book starts with radiographic techniques and then discusses the normal chest, the newborn, infections, tumors, and pulmonary vascular diseases. There is comprehensive treatment of mediastinal abnormalities and a discussion of airway abnormalities

  11. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  12. ATLAS Data Challenges - A Collaborative Worldwide Activity

    CERN Multimedia

    Poulard, G

    The goals of the ATLAS Data Challenges (DC) are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. It is understood that these Data Challenges should be of increasing complexity and that their results will be used as input for a Computing TDR and for preparing an MoU in due time. A major feature of the current computing activities (DC1) in ATLAS is the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the actual production of those samples. It should be noted that it is not an option to "run everything at CERN" even if we wanted to; the resources are not available at CERN to carry out the production on a reasonable time-scale. We have therefore had to face the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, th...

  13. The ATLAS Forward Calorimeter

    Science.gov (United States)

    Artamonov, A.; Bailey, D.; Belanger, G.; Cadabeschi, M.; Chen, T.-Y.; Epshteyn, V.; Gorbounov, P.; Joo, K. K.; Khakzad, M.; Khovanskiy, V.; Krieger, P.; Loch, P.; Mayer, J.; Neuheimer, E.; Oakham, F. G.; O'Neill, M.; Orr, R. S.; Qi, M.; Rutherfoord, J.; Savine, A.; Schram, M.; Shatalov, P.; Shaver, L.; Shupe, M.; Stairs, G.; Strickland, V.; Tompkins, D.; Tsukerman, I.; Vincent, K.

    2008-02-01

    Forward calorimeters, located near the incident beams, complete the nearly 4π coverage for high pT particles resulting from proton-proton collisions in the ATLAS detector at the Large Hadron Collider at CERN. Both the technology and the deployment of the forward calorimeters in ATLAS are novel. The liquid argon rod/tube electrode structure for the forward calorimeters was invented specifically for applications in high rate environments. The placement of the forward calorimeters adjacent to the other calorimeters relatively close to the interaction point provides several advantages including nearly seamless calorimetry and natural shielding for the muon system. The forward calorimeter performance requirements are driven by events with missing ET and tagging jets.

  14. Teaching atlas of mammography

    International Nuclear Information System (INIS)

    The illustrated case reports in this teaching atlas cover practically the entire range of possible pathological changes and are based on in-patient case material and 80,000 screening documents. The two basic approaches, detection and analysis of changes, are taught comprehensively and in great detail. A systematic procedure for analysing the mammograms in order to detect even the smallest changes, and its practical application, is explained using mammograms with findings that are unclear at first sight. A system of coordinates is presented which allows precise localisation of the changes. Exercises for practising the technique of identifying the pathological changes round off the methodological chapters. Additional technical image enhancements and detail enlargements are of great help in interpreting the findings. The specific approach adopted for this teaching atlas is a 'reverse procedure', which leaves the beaten track and starts with analysing the mammograms and evaluating the radiographic findings, in order to finally derive the diagnosis. (orig./CB)

  15. The ATLAS ROBIN

    Energy Technology Data Exchange (ETDEWEB)

    Cranfield, R; Crone, G [University College London, London (United Kingdom); Francis, D; Gorini, B; Joos, M; Petersen, J; Tremblet, L; Unel, G [CERN, Geneva (Switzerland); Green, B; Misiejuk, A; Strong, J; Teixeira-Dias, P [Royal Holloway University of London, London (United Kingdom); Kieft, G; Vermeulen, J [FOM - Institute SAF and University of Amsterdam/Nikhef, Amsterdam (Netherlands); Kugel, A; Mueller, M; Yu, M [University of Mannheim, Mannheim (Germany); Perera, V; Wickens, F [Rutherford Appleton Laboratory, Didcot (United Kingdom)], E-mail: kugel@ti.uni-mannheim.de

    2008-01-15

    The ATLAS readout subsystem is the main interface between approximately 1600 detector front-end readout links and the higher-level trigger farms. To handle the high event rate (up to 100 kHz) and bandwidth (up to 160 MB/s per link) the readout PCs are equipped with four ROBIN (readout buffer input) cards. Each ROBIN attaches to three optical links, provides local event buffering for approximately 300 ms and communicates with the higher-level trigger system for data and delete requests. According to the ATLAS baseline architecture this communication runs via the PCI bus of the host PC. In addition, each ROBIN provides a private Gigabit Ethernet port which can be used for the same purpose. Operational monitoring is performed via PCI. This paper presents a summary of the ROBIN hardware and software together with measurement results obtained from various test setups.
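
    The quoted numbers imply a per-link buffer requirement that is easy to estimate: at up to 160 MB/s per link and roughly 300 ms of local buffering, each link needs on the order of 48 MB, or about 144 MB for the three links a ROBIN serves. A quick check of that arithmetic (the figures are taken from the abstract; the sizing itself is only an estimate):

      # Rough estimate of ROBIN buffering needs from the figures quoted above.
      bandwidth_mb_s = 160       # worst-case input bandwidth per readout link
      buffer_time_s = 0.3        # approximate local event buffering time
      links_per_robin = 3

      per_link_mb = bandwidth_mb_s * buffer_time_s
      print(per_link_mb, "MB of buffering per link")             # 48.0
      print(per_link_mb * links_per_robin, "MB per ROBIN card")  # 144.0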

  16. Electroweak Physics at ATLAS

    CERN Document Server

    Conti, G; The ATLAS collaboration

    2013-01-01

    Various electroweak measurements have already been performed at the ATLAS experiment since the start of the Large Hadron Collider at CERN. A review of the latest results in $W/Z$ and diboson physics will be given here. The $W/Z$ physics results include the measurement of the high-mass Drell-Yan di-lepton production cross section, the $Wb(b)$ production cross section and the study of the transverse momentum of $Z/\gamma^*$. The latest $WW$, $WZ$, $ZZ$, $W\gamma$ and $Z\gamma$ production cross sections will be summarized, including updated $WW$ and $ZZ$ results. In particular, the $ZZ^*$ channel has been added. The ATLAS diboson results are also used to set limits on charged triple gauge couplings ($WWZ$, $WW\gamma$) and on neutral triple gauge couplings ($Z\gamma\gamma$, $ZZ\gamma$, $ZZZ$).

  17. ATLAS software packaging

    CERN Document Server

    Rybkin, G

    2012-01-01

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, the package PackDist, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages - platform dependent (one per platform available), source code excluding header files, other platform independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis pro...
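
    The recursive packaging of a CMT project and its dependencies described above is essentially a depth-first traversal of the project dependency graph, emitting one set of packages per project. The sketch below shows that traversal with a made-up dependency table and a stub packaging step; it is an illustration of the idea, not the PackDist implementation.

      def package_recursively(project, dependencies, build_packages, done=None):
          """Package `project` and everything it depends on, each exactly once.

          dependencies   : dict mapping a project name to the projects it uses
          build_packages : callable producing the per-project packages (platform-dependent,
                           sources, platform-independent files, documentation, debug info)
          """
          if done is None:
              done = set()
          if project in done:
              return done
          for dep in dependencies.get(project, []):
              package_recursively(dep, dependencies, build_packages, done)  # dependencies first
          build_packages(project)
          done.add(project)
          return done

      # Example with made-up project names:
      deps = {"ProjectOffline": ["ProjectBase"], "ProjectBase": []}
      package_recursively("ProjectOffline", deps, lambda p: print("packaging", p))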

  18. Electron isolation at ATLAS

    International Nuclear Information System (INIS)

    The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV centre-of-mass energy, whilst rejecting the enormous number of background events. Many of these interesting candidate events have isolated leptons in the final state, for example events with a gauge boson or SUSY. On top of the standard ATLAS electron identification, an isolation criterion has been developed using a likelihood as a multivariate approach with several discriminating variables. The likelihood is constructed by selecting electrons from Z decays for the signal, and electrons from b-quark jets for the background. Results for the example of associated Higgs boson production with top quarks and subsequent decay into a pair of W bosons are presented. In addition, first results of a likelihood to discriminate against jets are given and a possible extension to muons is discussed
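
    A likelihood-based isolation discriminant of the kind described combines per-variable probability densities for signal and background into a single ratio. The sketch below builds the standard product-likelihood discriminant from externally supplied signal and background PDFs; the PDFs and variable names are placeholders, not the tuned ATLAS ones.

      import numpy as np

      def likelihood_discriminant(values, signal_pdfs, background_pdfs):
          """Return L_S / (L_S + L_B) for one electron candidate.

          values          : dict of discriminating-variable values for the candidate
          signal_pdfs     : dict of callables giving the signal density per variable
          background_pdfs : same for the background (here electrons from b-quark jets)
          """
          log_ls = sum(np.log(signal_pdfs[v](x)) for v, x in values.items())
          log_lb = sum(np.log(background_pdfs[v](x)) for v, x in values.items())
          return 1.0 / (1.0 + np.exp(log_lb - log_ls))   # numerically stable form of the ratio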

  19. ATLAS-1 Logo

    Science.gov (United States)

    1990-01-01

    The primary payload for the Space Shuttle mission STS-45, launched March 24, 1992, was the Atmospheric Laboratory for Applications and Science-1 (ATLAS-1), which was mounted on nondeployable Spacelab pallets in the orbiter cargo bay. Eight countries, the U.S., France, Germany, Belgium, United Kingdom, Switzerland, The Netherlands, and Japan, provided 12 instruments designed to perform 14 investigations in four fields. Atmospheric science instruments/investigations: Atmospheric Lyman-Alpha Emissions (ALAE); Atmospheric Trace Molecule Spectroscopy (ATMOS); Grille Spectrometer (GRILLE); Imaging Spectrometric Observatory (ISO); Millimeter-Wave Atmospheric Sounder (MAS). Solar science: Active Cavity Radiometer Irradiance Monitor (ACRIM); Measurement of the Solar Constant (SOLCON); Solar Spectrum from 180 to 3,200 Nanometers (SOLSPEC); Solar Ultraviolet Spectral Irradiance Monitor (SUSIM). Space plasma physics: Atmospheric Emissions Photometric Imaging (AEPI); Space Experiments with Particle Accelerators (SEPAC). Ultraviolet astronomy: Far Ultraviolet Space Telescope (FAUST). This is the logo or emblem that was designed to represent the ATLAS-1 payload.

  20. Recent ATLAS Detector Improvements

    CERN Document Server

    de Nooij, L; The ATLAS collaboration

    2011-01-01

    During the recent LHC shutdown period, ATLAS performed vital maintenance and improvements on the various sub-detectors. For the calorimeters, repairs were carried out on front-end electronics and power supplies to recover detector coverage that had been lost since the last maintenance period. The ALFA luminosity detector was installed along the beam line and is currently being commissioned. Smaller scale repairs were needed on the Inner Detector. Maintenance on the muon system included repairs on the readout as well as updates and leak checks in the gas systems. Six TGC chambers were also replaced. This poster summarizes the repairs and their expected improvement for physics performance and reliability of ATLAS for the upcoming LHC run.

  1. ATLAS recognises its best suppliers

    CERN Multimedia

    2002-01-01

    The ATLAS Collaboration has recently rewarded two of its suppliers in the construction of very major detector components, fabricated in Japan. The ATLAS Supplier Award in recognition of excellent supplier performance has just been attributed to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months ago at their headquarters in Japan.

  2. ATLAS: civil engineering Point 1

    CERN Multimedia

    2000-01-01

    The ATLAS experimental area is located at Point 1, just across the main CERN entrance, in the commune of Meyrin. People there are busy finishing the different infrastructures for ATLAS. Real underground video. Nice view from the surface down to the cavern from the pit side - all the big machines look very small. The film has its original working sound.

  3. Lowering the first ATLAS toroid

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The ATLAS detector on the LHC at CERN will consist of eight toroid magnets, the first of which was lowered into the cavern in these images on 26 October 2004. The coils are supported on platforms where they will be attached to form a giant torus. The platforms will hold about 300 tonnes of ATLAS' muon chambers and will envelop the inner detectors.

  4. The ATLAS Forward Physics Program

    OpenAIRE

    Royon, Christophe

    2010-01-01

    We describe the ATLAS Forward Physics Program at low luminosity using the rapidity gap method and a dedicated detector called ALFA to tag the protons. We also describe the physics topics of the ATLAS Forward Physics Project at high instantaneous luminosity.

  5. L'esperimento ATLAS (Italian: The ATLAS Experiment)

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  6. El experimento ATLAS (Spanish: The ATLAS Experiment)

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  7. The ATLAS Experiment Movie

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  8. Higgs results from ATLAS

    CERN Document Server

    Chen, Xin; The ATLAS collaboration

    2015-01-01

    The updated Higgs measurements in various search channels with ATLAS Run 1 data are reviewed. Both the Standard Model (SM) Higgs results, such as $H\to\gamma\gamma, ZZ, WW, \tau\tau, \mu\mu, b\bar{b}$, and Beyond Standard Model (BSM) results, such as the charged Higgs, Higgs invisible decay and tensor couplings, are summarized. Prospects for future Higgs searches are briefly discussed.

  9. ATLAS support rails

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    These supports will hold the 7000 tonne ATLAS detector in its cavern at the LHC. The huge toroid will be assembled from eight coils that will house some of the muon chambers. Supported within the toroid will be the inner detector, containing tracking devices, as well as devices to measure the energies of the particles produced in the 14 TeV proton-proton collisions at the LHC.

  10. SUSY Searches in ATLAS

    CERN Document Server

    Zhuang, Xuai; The ATLAS collaboration

    2016-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results of searches for supersymmetric (SUSY) particles, with a focus on those obtained using proton-proton collisions at a centre of mass energy of 13 TeV with 2015+2016 data. Searches with final states including jets, missing transverse momentum and light leptons will be presented.

  11. SUSY Searches at ATLAS

    CERN Document Server

    Lorenz, Jeanette; The ATLAS collaboration

    2016-01-01

    Using 3.2 fb$^{-1}$ of proton-proton collision data at $\sqrt{s} = 13$ TeV, delivered by the LHC and recorded by the ATLAS detector in Run 2, various SUSY searches for gluinos, stops and sbottoms were pursued. The analyses focus on simple and robust analysis techniques and are optimized for specific benchmark signatures. Stringent limits significantly superseding the Run 1 limits are obtained.

  12. Atlas of duplex scanning

    International Nuclear Information System (INIS)

    This book presents the first atlas devoted entirely to duplex scanning. It details the uses of this important "up-and-coming" diagnostic tool for vascular and general surgeons and radiologists. It covers scanning of the extremities as well as of the carotids. Correlative line drawings elaborate on and clarify the excellent scan images. The topics covered include the principles of duplex scanning of arteries and veins, techniques, and results; normal anatomy; and venous thromboses, arterial occlusion, true and false aneurysms, and graft stenoses.

  13. ATLAS/CMS Upgrades

    CERN Document Server

    Horii, Yasuyuki; The ATLAS collaboration

    2016-01-01

    Precise Higgs measurements and new physics searches are planned at the LHC (HL-LHC) with an integrated luminosity of 300 fb^{-1} (3000 fb^{-1}). The increased peak luminosity presents a significant challenge for the experiments. In this presentation, the plans for the ATLAS and CMS upgrades are introduced. Physics prospects for some topics related to ‘flavour’, e.g. Higgs couplings, B_{s,d} -> mumu, and FCNC top decays, are also shown.

  14. Hybrid Atlas Models

    CERN Document Server

    Ichiba, Tomoyuki; Banner, Adrian; Karatzas, Ioannis; Fernholz, Robert

    2009-01-01

    We study Atlas-type models of equity markets with local characteristics that depend on both name and rank, and in ways that induce a stability of the capital distribution. Ergodic properties and rankings of processes are examined with reference to the theory of reflected Brownian motions in polyhedral domains. In the context of such models, we discuss properties of various investment strategies, including the so-called growth-optimal and universal portfolios.

  15. The Genome Atlas Resource

    OpenAIRE

    Azam Qureshi, Matloob; Rotenberg, Eva; Stærfeldt, Hans Henrik; Hansson, Lena; Ussery, David

    2010-01-01

    The Genome Atlas is a resource for addressing the challenges of synchronising prokaryotic genomic sequence data from multiple public repositories. This resource can integrate bioinformatic analyses in various data formats and of varying quality. Existing open source tools have been used together with scripts and algorithms developed in a variety of programming languages at the Centre for Biological Sequence Analysis in order to create a three-tier software application for genome analysis. The r...

  16. Supersymmetry Searches in ATLAS

    CERN Document Server

    Romero Adam, Elena; The ATLAS collaboration

    2015-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involved final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures.

  17. The ATLAS detector

    International Nuclear Information System (INIS)

    The ATLAS detector, one of the two multi-purpose detectors at the Large Hadron Collider at CERN, is currently being built in order to be ready in time for the first proton-proton collisions. A description of the detector components will be given, corresponding to the most up-to-date design and status of construction, complemented by test-beam results and the performance of the first series modules. (author)

  18. ATLAS overview week highlights

    CERN Multimedia

    D. Froidevaux

    2005-01-01

    A warm and early October afternoon saw the beginning of the 2005 ATLAS overview week, which took place Rue de La Montagne Sainte-Geneviève in the heart of the Quartier Latin in Paris. All visitors had been warned many times by the ATLAS management and the organisers that the premises would be the subject of strict security clearance because of the "plan Vigipirate", which remains at some level of alert in all public buildings across France. The public building in question is now part of the Ministère de La Recherche, but used to host one of the so-called French "Grandes Ecoles", called l'Ecole Polytechnique (in France there is only one Ecole Polytechnique, whereas there are two in Switzerland) until the end of the seventies, a little while after it opened its doors also to women. In fact, the setting chosen for this ATLAS overview week by our hosts from LPNHE Paris has turned out to be ideal and the security was never an ordeal. For those seeing Paris for the first time, there we...

  19. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snap-shot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (See Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before lowering it into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  20. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Two Russian companies were honoured with an ATLAS Award, for supply of the ATLAS Inner Detector barrel support structure elements, last week. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  1. 24 October 2014 - President of the Republic of Ecuador R. Correa Delgado signing the guest book with Vice President L. Moreno and Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Guillaume, Jeanneret

    2014-01-01

    Visiting the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton and ATLAS User F. Monticelli; throughout accompanied by Adviser for Ecuador J. Salicio Diez and Director for Research and Scientific Computing S. Bertolucci.

  2. Application of Grid technologies and search for exotics physics with the ATLAS experiment at the LHC

    CERN Document Server

    March, Luis; Ros, Eduardo

    The work presented in this thesis has been performed within the ATLAS (A Toroidal LHC ApparatuS) collaboration. Two subjects have been investigated. One subject is the Computing System Commissioning (CSC) production using an instance of the Production System (ProdSys), called Lexor, and the test of the ATLAS Distributed Analysis (ADA) using ProdSys. The other subject is the simulation and subsequent analysis of processes involving new particles predicted by the Little Higgs model within the ATLAS detector. An introduction to the Standard Model (SM), the Large Hadron Collider (LHC) and the ATLAS experiment, software and computing is given in chapter 1. The problems of the SM are discussed and some proposed solutions are reviewed. The SM introduction is followed by an overview of the LHC and ATLAS. The main ATLAS subsystems are described and the ATLAS software and computing model is discussed. Many physics processes within and beyond the Standard Model involve b-quark decays. New heavy particles, expected in mo...

  3. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, CMS, and LHCb experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well-organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionalities have been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This contribution summarizes the different developm...
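
    The auto-exclusion logic referred to above can be illustrated with a small, hypothetical sketch (it is not the HammerCloud code itself): recent functional-test results are summarised per site, and sites whose failure rate exceeds a threshold are flagged as exclusion candidates while their failed tests are collected for the administrator view. The result structure and the 50% threshold are assumptions made for the example.

    # Illustrative sketch (not the actual HammerCloud code): summarise recent
    # functional-test results per site and flag candidates for auto-exclusion.
    # The (site, test, passed) structure and the threshold are assumptions.
    from collections import defaultdict

    def summarise_site_tests(results, failure_threshold=0.5):
        """results: iterable of (site, test_name, passed) tuples."""
        per_site = defaultdict(lambda: {"total": 0, "failed": 0, "failed_tests": []})
        for site, test_name, passed in results:
            stats = per_site[site]
            stats["total"] += 1
            if not passed:
                stats["failed"] += 1
                stats["failed_tests"].append(test_name)
        summary = {}
        for site, stats in per_site.items():
            failure_rate = stats["failed"] / stats["total"]
            summary[site] = {
                "failure_rate": failure_rate,
                # only a candidate flag; real blacklisting involves more policy
                "exclude_candidate": failure_rate >= failure_threshold,
                "failed_tests": stats["failed_tests"],
            }
        return summary

    if __name__ == "__main__":
        results = [
            ("SITE_A", "athena_hello", True),
            ("SITE_A", "stage_in", False),
            ("SITE_A", "stage_out", False),
            ("SITE_B", "athena_hello", True),
            ("SITE_B", "stage_in", True),
        ]
        for site, info in summarise_site_tests(results).items():
            print(site, info)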

  4. Functional testing of the ATLAS distributed analysis resources with Ganga

    International Nuclear Information System (INIS)

    The ATLAS computing model is based on the GRID paradigm, which entails a high degree of decentralisation and sharing of computer resources. For such a large system to be efficient, regular checks on the performance of the involved computing facilities are desirable. We present the recent developments of a tool, the ATLAS Gangarobot, designed to perform regular tests of all sites by running arbitrary user applications with varied configurations at predefined time intervals. The Gangarobot uses Ganga, a front-end for job definition and management, for configuring and running the test applications on the various GRID sites. The test results can be used to dynamically blacklist sites that are temporarily unsuited to run analysis jobs, therefore providing on the one hand a way to quickly spot site problems, and on the other hand allowing for an effective distribution of the work load on the available resources.
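
    As a rough illustration of the testing loop described above (the real Gangarobot drives Ganga and the Grid middleware; the code below is only a stand-in), a periodic cycle submits a test payload to each site and maintains a dynamic blacklist of sites whose tests fail. The run_test_job function and the fixed interval are hypothetical placeholders.

    # Minimal sketch of periodic site testing in the spirit described above
    # (not the Gangarobot/Ganga API): submit a test payload to each site at
    # fixed intervals and blacklist sites whose test fails.
    import random
    import time

    def run_test_job(site, application, config):
        """Hypothetical stand-in for submitting one test job; True on success."""
        return random.random() > 0.2  # simulate an 80% success rate

    def test_cycle(sites, applications, blacklist):
        for site in sites:
            ok = all(run_test_job(site, app, {"site": site}) for app in applications)
            if ok:
                blacklist.discard(site)   # site recovered: allow analysis jobs again
            else:
                blacklist.add(site)       # temporarily unsuited for analysis jobs
        return blacklist

    if __name__ == "__main__":
        sites = ["SITE_A", "SITE_B", "SITE_C"]
        apps = ["user_analysis_template"]
        blacklist = set()
        for _ in range(3):                # three cycles instead of a daemon loop
            blacklist = test_cycle(sites, apps, blacklist)
            print("blacklisted:", sorted(blacklist))
            time.sleep(0.1)               # stand-in for the predefined time interval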

  5. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well-organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...

  6. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric analysis cost model to determine the costs. The integrator also estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is
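
    The workbook-and-integrator architecture sketched in the abstract can be mimicked with ordinary functions standing in for the spreadsheet models; the model names and numbers below are invented for illustration and are not NASA data.

    # Hedged sketch of the parameter-passing idea described above: system models
    # (plain functions instead of spreadsheet workbooks) receive analyst
    # parameters, and an integrator accumulates campaign-level launched mass
    # and cost.  All figures are illustrative placeholders.
    def launcher_model(params):
        payload = params["payload_mass_kg"]
        return {"launched_mass_kg": payload, "launch_cost_musd": 0.01 * payload}

    def depot_model(params):
        prop = params["propellant_mass_kg"]
        return {"launched_mass_kg": prop, "launch_cost_musd": 0.008 * prop}

    def integrate_campaign(missions):
        total = {"launched_mass_kg": 0.0, "launch_cost_musd": 0.0}
        for model, params in missions:
            result = model(params)
            for key, value in result.items():
                total[key] += value
        return total

    if __name__ == "__main__":
        campaign = [
            (launcher_model, {"payload_mass_kg": 20000}),
            (depot_model, {"propellant_mass_kg": 50000}),
        ]
        print(integrate_campaign(campaign))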

  7. The Hatfield Lunar Atlas Digitally Re-Mastered Edition

    CERN Document Server

    Cook, Anthony Charles

    2012-01-01

    The Hatfield Lunar Atlas has become an amateur lunar observer's bible since it was first published in 1968. A major update of the atlas was made in 1998, using the same wonderful photographs that Commander Henry Hatfield made with his purpose-built 12-inch (300 mm) telescope, but bringing the lunar nomenclature up to date and changing the units from Imperial to S.I. metric. However, with modern telescope optics, digital imaging equipment and computer enhancement new pictures can easily surpass what was achieved with Henry Hatfield's 12-inch telescope and a film camera. This limits the usefulness of the original atlas to visual observing or imaging with rather small amateur telescopes. The new, digitally re-mastered edition vastly improves the clarity and definition of the original photographs - significantly beyond the resolution limits of the photographic grains present in earlier atlas versions - while preserving the layout and style of the original publications. This has been achieved by merging computer-v...

  8. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the operational conditions of the experiment as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...
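
    A toy version of the farm-wide histogram integration mentioned above (not the TDAQ Information Service API) simply sums, bin by bin, the histograms published by each node before they are analysed; the node names and histogram contents below are invented.

    # Illustrative sketch of farm-wide histogram integration: each node publishes
    # the bin contents of a named histogram, and the aggregator sums them bin by
    # bin before analysis.  This is a stand-in, not the IS interface.
    import numpy as np

    def aggregate_histograms(per_node_histograms):
        """per_node_histograms: dict node -> {hist_name: 1D array of bin counts}."""
        totals = {}
        for node, hists in per_node_histograms.items():
            for name, counts in hists.items():
                counts = np.asarray(counts, dtype=float)
                if name in totals:
                    totals[name] += counts
                else:
                    totals[name] = counts.copy()
        return totals

    if __name__ == "__main__":
        published = {
            "node001": {"hlt_muon_pt": [5, 9, 2, 0]},
            "node002": {"hlt_muon_pt": [4, 7, 3, 1]},
        }
        print(aggregate_histograms(published)["hlt_muon_pt"])  # [ 9. 16.  5.  1.]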

  9. ATLAS Off-Grid sites (Tier-3) monitoring

    CERN Document Server

    Petrosyan, A S; The ATLAS collaboration

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centers to be able to handle such a large volume of data. The ATLAS Distributed Computing activities have so far concentrated on the “central” part of the computing system of the experiment, namely the first 3 tiers (the CERN Tier-0, the 10 Tier-1 centers and about 50 Tier-2s). This is a coherent system to perform data processing and management on a global scale and to host (re)processing and simulation activities, down to group and user analysis. With the formation of small computing centers, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centers consist of non-pledged resources mostly dedicated to data analysis by geographically close or local scientific groups. The experiment supplies all necessary software to operate a typical Grid-site, ...

  10. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool

    Science.gov (United States)

    Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; Alzheimer's Disease Neuroimaging Initiative, the

    2015-11-01

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice$_{\\text{ADNI}} = 0.929 \\pm 0.003$ and Dice$_{\\text{OASIS}} = 0.869 \\pm 0.002$). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
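
    The Dice scores quoted above measure the voxel overlap between the automatic and the manual segmentations; a minimal NumPy implementation of the standard Dice similarity coefficient, with toy masks, is shown below.

    # Dice similarity coefficient, 2*|A∩B| / (|A|+|B|); the arrays are toy data.
    import numpy as np

    def dice(seg_a, seg_b):
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: define as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom

    if __name__ == "__main__":
        auto = np.array([[0, 1, 1], [0, 1, 0]])
        manual = np.array([[0, 1, 1], [1, 1, 0]])
        print(round(dice(auto, manual), 3))  # 0.857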

  11. Fusion set selection with surrogate metric in multi-atlas based image segmentation

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2016-02-01

    Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
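
    The relevance ordering discussed above can be sketched with a concrete surrogate: rank the atlases by an image-similarity score against the target (normalised cross-correlation is used here as one common choice, not necessarily the one studied in the paper) and keep the k best for fusion. The fixed k is an assumption; choosing it well is exactly the question the paper addresses.

    # Sketch of surrogate-based atlas relevance ordering and fusion-set selection.
    import numpy as np

    def ncc(a, b):
        """Normalised cross-correlation between two images of the same shape."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def select_fusion_set(target, atlases, k=2):
        scored = [(ncc(target, atlas), idx) for idx, atlas in enumerate(atlases)]
        scored.sort(reverse=True)            # most-relevant atlases first
        return [idx for _, idx in scored[:k]]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        target = rng.normal(size=(8, 8))
        atlases = [target + rng.normal(scale=s, size=(8, 8)) for s in (0.1, 1.0, 3.0)]
        print(select_fusion_set(target, atlases, k=2))  # most likely [0, 1]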

  12. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC

    Directory of Open Access Journals (Sweden)

    Megino Fernando Barreiro

    2016-01-01

    The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily on up to 200 thousand simultaneous cores (limited by the resources available to ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.

  13. MRI-based treatment planning with pseudo CT generated through atlas registration

    International Nuclear Information System (INIS)

    Purpose: To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. Methods: A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. Results: The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787–0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%–98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the
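
    Two of the schemes described above, the bulk water-equivalent assignment and the arithmetic mean over deformed atlas CTs, reduce to very simple voxel-wise operations; the hedged sketch below uses toy arrays in place of 3D volumes and illustrative Hounsfield numbers.

    # Sketch of two pseudo-CT schemes: bulk water assignment and the arithmetic
    # mean of atlas CTs already deformed into the patient space.  Arrays are toy
    # stand-ins for 3D image volumes.
    import numpy as np

    def bulk_water_pseudo_ct(patient_mask):
        """Assign the Hounsfield unit of water (0 HU) to the whole patient volume."""
        return np.where(patient_mask, 0.0, -1000.0)  # -1000 HU for air outside

    def mean_pseudo_ct(deformed_atlas_cts):
        """Voxel-wise arithmetic mean of deformed atlas CT volumes."""
        stack = np.stack([np.asarray(ct, dtype=float) for ct in deformed_atlas_cts])
        return stack.mean(axis=0)

    if __name__ == "__main__":
        mask = np.array([[0, 1, 1], [0, 1, 0]], dtype=bool)
        atlas_cts = [np.full((2, 3), 40.0), np.full((2, 3), 60.0)]
        print(bulk_water_pseudo_ct(mask))
        print(mean_pseudo_ct(atlas_cts))  # all 50.0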

  14. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whol...
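
    The associative-memory idea at the heart of the Fast TracKer can be caricatured in software: coarse hit addresses, one per detector layer, are compared against a precomputed pattern bank, and every combination that matches a stored pattern defines a track candidate ("road"). The pattern bank and hit addresses below are invented, and a Python lookup stands in for what the AM chips do massively in parallel.

    # Conceptual sketch only: a set lookup imitating associative-memory pattern
    # matching over coarse hit addresses, one per detector layer.

    # Hypothetical pattern bank: tuples of coarse addresses produced beforehand
    # from simulated tracks.
    PATTERN_BANK = {
        (3, 5, 7, 9),
        (3, 5, 8, 10),
        (4, 6, 8, 11),
    }

    def find_roads(event_hits, n_layers=4):
        """event_hits: dict layer -> list of coarse addresses seen in that layer."""
        matched = []

        def build(prefix, layer):
            # Enumerate one hit per layer; the hardware checks all combinations
            # against the bank effectively at once.
            if layer == n_layers:
                if tuple(prefix) in PATTERN_BANK:
                    matched.append(tuple(prefix))
                return
            for address in event_hits.get(layer, []):
                build(prefix + [address], layer + 1)

        build([], 0)
        return matched

    if __name__ == "__main__":
        hits = {0: [3, 4], 1: [5], 2: [7, 8], 3: [9, 10]}
        print(find_roads(hits))  # [(3, 5, 7, 9), (3, 5, 8, 10)]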

  15. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectation since the start of data taking at the end of 2009. Since then, several billion collision events have been recorded by the ATLAS experiment. With a data-taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data of unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC is discussed in section 3.1. The ATLAS computing model groups the different types of computing centres of the ATLAS Collaboration in a tiered hierarchy that ranges from the Tier-0 at CERN, down to the 11 Tier-1 centres and the nearly 80 Tier-2 centres distributed worldwide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3 are institution-level non-ATLAS funded or controlled centres that participate presuma...

  16. ATLAS Trigger: design and commissioning

    CERN Document Server

    Pastore, F; The ATLAS collaboration

    2009-01-01

    The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton-proton collisions from beams crossing at 40 MHz. A three-level trigger system was designed to select potentially interesting events and reduce the incoming rate to 100-200 Hz. The first trigger level (LVL1) is implemented in custom-built electronics; the second and third trigger levels are realised in software. Based on calorimeter information and hits in dedicated muon-trigger detectors, the LVL1 decision is made by the central-trigger processor, yielding an output rate of less than 100 kHz. The allowed latency for the trigger decision at this stage is less than 2.5 microseconds. The two subsequent levels, called the High-Level Trigger (HLT), further reduce the rate to the offline storage rate while retaining the most interesting physics. The HLT is implemented in software running on commercially available computer farms and consists of the Level 2 trigger and the Event Filter. To reduce the network data traffic and the processing time to managea...
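
    The design figures quoted above imply rejection factors that follow from simple division; the short calculation below uses the 40 MHz crossing rate, the sub-100 kHz LVL1 output and a 200 Hz storage rate as stated in the abstract.

    # Worked rate-reduction example using the design figures quoted above.
    bunch_crossing_rate_hz = 40e6   # LHC bunch-crossing rate
    lvl1_output_hz = 100e3          # LVL1 output rate (upper limit)
    storage_rate_hz = 200.0         # offline storage rate (upper end of 100-200 Hz)

    lvl1_rejection = bunch_crossing_rate_hz / lvl1_output_hz    # 400
    hlt_rejection = lvl1_output_hz / storage_rate_hz            # 500
    total_rejection = bunch_crossing_rate_hz / storage_rate_hz  # 200000

    print("LVL1 rejection  ~ %.0f" % lvl1_rejection)
    print("HLT rejection   ~ %.0f" % hlt_rejection)
    print("Total rejection ~ %.0f" % total_rejection)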

  17. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  18. European Atlas of Soil Biodiversity

    DEFF Research Database (Denmark)

    Krogh (contributor), Paul Henning

    and climate change? The first ever European Atlas of Soil Biodiversity uses informative texts, stunning photographs and maps to answer these questions and other issues. The European Atlas of Soil Biodiversity functions as a comprehensive guide allowing non-specialists to access information about this unseen...... Biodiversity'. Starting with the smallest organisms such as the bacteria, this segment works through a range of taxonomic groups such as fungi, nematodes, insects and macro-fauna to illustrate the astonishing levels of heterogeneity of life in soil. The European Atlas of Soil Biodiversity is more than just...

  19. Electrons and Photons at ATLAS

    CERN Document Server

    Heim, Sarah; The ATLAS collaboration

    2016-01-01

    The performance of the reconstruction, calibration and identification of electrons and photons with the ATLAS detector at the LHC is a key component to realize the ATLAS full physics potential, both in the searches for new physics and in precision measurements. The algorithms used for the reconstruction and identification of electrons and photons with the ATLAS detector during LHC run 2 are presented. Measurements of the identification efficiencies are derived from data. The results from the 2015 pp collision data set at sqrt(s)=13 TeV are reported. The electron and photon energy calibration procedure and its performance are also discussed.

  20. A unified framework for cross-modality multi-atlas segmentation of brain MRI.

    Science.gov (United States)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-12-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion. PMID:24001931
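
    For contrast with the intensity-independent model described above, the simplest label-fusion baseline is a voxel-wise majority vote over atlas labels already propagated to the target space; a small sketch with toy 1D label arrays follows.

    # Majority-vote label fusion baseline (not the generative model of the paper).
    import numpy as np

    def majority_vote(propagated_labels):
        """propagated_labels: list of integer label arrays of identical shape."""
        stack = np.stack(propagated_labels)                 # (n_atlases, ...)
        n_labels = int(stack.max()) + 1
        # Count votes per label at each voxel, then take the most frequent label.
        votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
        return votes.argmax(axis=0)

    if __name__ == "__main__":
        atlas_labels = [
            np.array([0, 1, 1, 2]),
            np.array([0, 1, 2, 2]),
            np.array([1, 1, 1, 2]),
        ]
        print(majority_vote(atlas_labels))  # [0 1 1 2]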

  1. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS Production System Prodsys-2. BigPanDA has been developed to serve the growing computation needs of the ATLAS Experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high level summaries to detailed drill-down job diagnostics. The system has been in production and has remained in continuous development since mid 2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  2. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...
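
    The flavour of such a stress test can be conveyed with a deliberately scaled-down sketch: concurrent worker threads, each with its own connection, issue repeated reads against a toy SQLite database and the aggregate query rate is reported. The real tests target the Oracle-based conditions database through production reprocessing jobs, so everything below (file name, schema, thread counts) is illustrative only.

    # Scaled-down concurrent-read stress test against a toy SQLite database.
    import sqlite3
    import threading
    import time

    DB_FILE = "toy_conditions.db"

    def prepare_db(n_rows=1000):
        conn = sqlite3.connect(DB_FILE)
        conn.execute("DROP TABLE IF EXISTS conditions")
        conn.execute("CREATE TABLE conditions (iov INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany("INSERT INTO conditions VALUES (?, ?)",
                         [(i, "payload-%d" % i) for i in range(n_rows)])
        conn.commit()
        conn.close()

    def worker(n_queries, counters, index):
        conn = sqlite3.connect(DB_FILE)       # one connection per concurrent "job"
        for i in range(n_queries):
            conn.execute("SELECT payload FROM conditions WHERE iov = ?",
                         (i % 1000,)).fetchone()
        conn.close()
        counters[index] = n_queries

    if __name__ == "__main__":
        prepare_db()
        n_threads, n_queries = 8, 500
        counters = [0] * n_threads
        threads = [threading.Thread(target=worker, args=(n_queries, counters, i))
                   for i in range(n_threads)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start
        total = sum(counters)
        print("%d queries in %.2fs (%.0f queries/s)" % (total, elapsed, total / elapsed))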

  3. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2005-01-01

    CPPM Laboratory, Marseille. Starting with the workshop - adding modules to the strip 00:09:19. Exterior - entering the lab site by car, Sascha Rosanov and a PR lady walking, lab sign on building - Physique des Particules de Marseille 00:20:00. Interviews about the ATLAS pixel work for bio-medical research 00:34:00. Interview of Roy Aleksov, Head of CPPM Laboratory, on working in an international team and working with CERN and the GRID. The rest of the film includes lab testing and some exterior shots.

  4. Supersymmetry Searches with ATLAS

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2015-01-01

    Supersymmetry is one of the best motivated and studied theories of physics beyond the Standard Model. This document summarises recent ATLAS results of searches for supersymmetric particles using LHC proton--proton collision data at $\\sqrt{s} = 7$ and 8 TeV. Weak and strong production Supersymmetry scenarios are considered, with particular attention to direct production of third generation supersymmetric particles. The searches involve final states including jets, missing transverse momentum, leptons, and long-lived particles. Sensitivity projections for the $\\sqrt{s} = 13$ TeV data are also presented.

  5. ATLAS TV PROJECT

    CERN Document Server

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk. Sequence 1: reception for Markus Nordberg and Andrew Millington by about 20 physicists from the Budker Nuclear Physics Institute; host: Yuri Tikhonov; various short talks and exchanges, with coffee. Sequence 2: visit to BINP facilities; Tikhonov and Nordberg walking and talking; visit to the electron accelerator and the old solar detector. Sequence 3: visit to the BINP workshops; work on big wheel segments (shots over-exposed); work on ATLAS coils and LHC magnets; men playing chess; exterior shots of Tikhonov and Nordberg arriving. Sequence 4: shots from the car of the journey from the workshop to the main BINP building.

  6. Supersymmetry searches in ATLAS

    CERN Document Server

    Kuwertz, Emma Sian; The ATLAS collaboration

    2015-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involve final states including jets, including those tagged as originating from b-quark decays, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures. An overview of the constraints on supersymmetry from the Run 1 results is presented, as well as sensitivity projections for the data that will be collected in 2015.

  7. The ATLAS Simulation Software

    International Nuclear Information System (INIS)

    We present the status of the ATLAS Simulation Project. Recent detector description improvements have focussed on commissioning layouts, implementation of inert material, and comparisons to the as-built detector. Core Simulation is reviewed with a focus on parameter optimizations, physics list choices, visualization, large-scale production, and validation. A fast simulation is also briefly described, and its performance is evaluated with respect to the full Simulation. Digitization, the last step of the Monte Carlo chain, is described, including developments in pile up and data overlay.

  8. QCD Measurements at ATLAS

    CERN Document Server

    Hubacek, Zdenek; The ATLAS collaboration

    2016-01-01

    This paper presents recent QCD-related measurements from the ATLAS Experiment at the LHC at CERN. The results on the total inelastic cross-section, charged particle production, jet production, photon production, and W- and Z-boson production are briefly summarized. The measurements are performed at different centre-of-mass energies sqrt(s) = 7, 8, and 13 TeV. The measured cross-sections are generally found to be in agreement with the expectations from the Standard Model within the estimated uncertainties.

  9. ATLAS Exotic Searches

    Directory of Open Access Journals (Sweden)

    Bousson Nicolas

    2012-06-01

    Thanks to the outstanding performance of the Large Hadron Collider (LHC) that delivered more than 2 fb−1 of proton-proton collision data at a center-of-mass energy of 7 TeV, the ATLAS experiment has been able to explore a wide range of exotic models trying to address the questions unanswered by the Standard Model of particle physics. Searches for leptoquarks, new heavy quarks, vector-like quarks, black holes, hidden valley and contact interactions are reviewed in these proceedings.

  10. ATLAS Exotic Searches

    CERN Document Server

    Bousson, Nicolas

    2012-01-01

    Thanks to the outstanding performance of the Large Hadron Collider (LHC) that delivered more than 2 fb^-1 of proton-proton collision data at center-of-mass energy of 7 TeV, the ATLAS experiment has been able to explore a wide range of exotic models trying to address the questions unanswered by the Standard Model of particle physics. Searches for leptoquarks, new heavy quarks, vector-like quarks, black holes, hidden valley and contact interactions are reviewed in these proceedings.

  11. Top Physics at ATLAS

    OpenAIRE

    Barisonzi, Marcello

    2005-01-01

    The Large Hadron Collider (LHC) is a top quark factory: due to its high design luminosity, the LHC will produce about 200 million top quarks per year of operation. The large amount of data will allow the properties of the top quark, most notably its cross-section, mass and spin, to be studied with great precision. The Top Physics Working Group has been set up at the ATLAS experiment to evaluate the precision reach of physics measurements in the top sector, and to study the systematic effects of the ATLA...

  12. Supersymmetry searches in ATLAS

    CERN Document Server

    Meloni, Federico; The ATLAS collaboration

    2015-01-01

    This document summarises recent ATLAS results for searches for supersymmetric particles using LHC proton-proton collision data. Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. We consider both R-Parity conserving and R-Parity violating SUSY scenarios. The searches involve final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures. Sensitivity projections for the data that will be collected in 2015 are also presented.

  13. Supersymmetry searches in ATLAS

    CERN Document Server

    Meloni, Federico; The ATLAS collaboration

    2015-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involved final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures. Sensitivity projections for the data that will be collected in 2015 are also presented.

  14. Quarkonium production at ATLAS

    CERN Document Server

    Price, D; The ATLAS collaboration

    2011-01-01

    The production of quarkonium is an important testing ground for QCD calculations. The J/psi and Upsilon production cross-sections are measured in proton-proton collisions at a centre-of-mass energy of 7 TeV with the ATLAS detector at the LHC. Differential cross-sections as a function of transverse momentum and rapidity are presented. The fraction of J/psi produced in B-hadron decays is also measured and the differential production cross-sections of prompt and non-prompt J/psi production determined separately. Results are compared to recent predictions from perturbative QCD calculations.

  15. Dark Matter in ATLAS

    CERN Document Server

    Resconi, Silvia; The ATLAS collaboration

    2016-01-01

    Results of Dark Matter searches in mono-X analyses with the ATLAS experiment at the Large Hadron Collider are reported. The data were collected in proton–proton collisions at a centre-of-mass energy of 13 TeV and correspond to an integrated luminosity of 3.2 fb-1. A description of the main characteristics of each analysis and of how the main backgrounds are estimated is given. The observed data are in agreement with the expected Standard Model backgrounds for all analyses described. Exclusion limits are presented for Dark Matter models including pair production of dark matter candidates.

  16. Dark Matter in ATLAS

    CERN Document Server

    Resconi, Silvia; The ATLAS collaboration

    2016-01-01

    An overview of Dark Matter searches with the ATLAS experiment at the Large Hadron Collider (LHC) is shown. Results of Mono-X analyses requiring large missing transverse momentum and a recoiling detectable physics object (X) are reported. The data were collected in proton-proton collisions at a centre-of-mass energy of 13 TeV. The observed data are in agreement with the expected Standard Model backgrounds for all analyses described. Exclusion limits are presented for Dark Matter models including pair production of Dark Matter candidates.

  17. Exotics searches in ATLAS

    CERN Document Server

    Vranjes, N; The ATLAS collaboration

    2016-01-01

    We report on the latest searches for (non-SUSY) Beyond Standard Model phenomena performed with the ATLAS detector. The searches have been performed with the data from proton-proton collisions at a centre-of-mass energy of 7 TeV collected in 2010 and 2011. Various experimental signatures have been studied involving reconstruction and measurement of leptons, photons, jets, missing transverse energy, as well as reconstruction of top quarks. For most of the signatures, the experimental reach is significantly increased with respect to previous results.

  18. The Genome Atlas Resource

    DEFF Research Database (Denmark)

    Azam Qureshi, Matloob; Rotenberg, Eva; Stærfeldt, Hans Henrik;

    2010-01-01

    Abstract. The Genome Atlas is a resource for addressing the challenges of synchronising prokaryotic genomic sequence data from multiple public repositories. This resource can integrate bioinformatic analyses in various data formats and of varying quality. Existing open source tools have been used together...... with scripts and algorithms developed in a variety of programming languages at the Centre for Biological Sequence Analysis in order to create a three-tier software application for genome analysis. The results are made available via a web interface developed in Java, PHP and Perl CGI. User...

  19. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    International Nuclear Information System (INIS)

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images. (paper)

  20. Parcellation of the Healthy Neonatal Brain into 107 Regions Using Atlas Propagation through Intermediate Time Points in Childhood

    Science.gov (United States)

    Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G.; Anblagan, Devasuda; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Macnaught, Gillian; Semple, Scott I.; Bastin, Mark E.; Boardman, James P.

    2016-01-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2–41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development. PMID:27242423

  1. Parcellation of the healthy neonatal brain into 107 regions using atlas propagation through intermediate time points in childhood

    Directory of Open Access Journals (Sweden)

    Manuel eBlesa Cabez

    2016-05-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2-41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modelling brain growth during development.

  2. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. On surface no restricted access  The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  3. World Ocean Atlas 2005, Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  4. ATLAS recognises its best suppliers

    CERN Multimedia

    Jenni, P

    The ATLAS Collaboration has recently rewarded two of its suppliers in the construction of very major detector components, fabricated in Japan. The ATLAS Supplier Award in recognition of excellent supplier performance was attributed on 2nd September 2002 during a ceremony in Hall 180 to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months before at their headquarters in Japan. The ATLAS experiment will become a reality thanks to a large international collaboration partnership. The industrial suppliers for the components all over the world play a major role in the construction of this gigantic jigsaw for the LHC. And sometimes they perform so well, that their work deserves specially to be recognised. This is the case for Kawasaki Heavy Industries and Toshiba Corporation, producers of the Liquid Argon Barrel Cryostat and of the Superconducting Central Solenoid, respectively. With these awards, the ATLAS Collaboration wants to congratulate Kawasaki and Toshiba for fulfilling the hi...

  5. World Ocean Atlas 2005, Salinity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  6. Wheels lining up for ATLAS

    CERN Multimedia

    2003-01-01

    On 30 October, the mechanical test assembly of the central barrel of the ATLAS tile hadronic calorimeter was completed in building 185. It is the second Tilecal wheel to be completely assembled this year.

  7. ATLAS online data quality monitoring

    CERN Document Server

    Cuenca Almenar, C; The ATLAS collaboration; Hadavand, H; Ilchenko, Y; Kolos, S; Slagle, K; Taffard, A

    2010-01-01

    Every minute the ATLAS detector is taking data, the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles millions of histogram updates coming from thousands of applications, executes over forty thousand advanced data quality checks for a subset of those histograms, and displays histograms and the results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. The online data quality monitoring system has been of great help in providing quick feedback to the subsystems about the functioning and performance of the different parts of ATLAS by providing a configurable, easy and fast visualization of all this information. The Data Quality Monitoring Display (DQMD) is a visualization tool for the automatic data quality assessment of the ATLAS experiment. It is the interface through which the shift crew and experts can validate the quality of the data being recorded or processed, be warned of problems related to data quality, an...
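
    An individual automatic check of the kind counted above typically reduces to comparing a few summary quantities of a histogram against configured thresholds; the sketch below flags a histogram by its empty-bin fraction and normalised mean, with thresholds and traffic-light names chosen purely for illustration.

    # Illustrative data-quality check on a histogram's summary quantities.
    import numpy as np

    def check_histogram(bin_contents, mean_range=(0.4, 0.6), max_empty_fraction=0.2):
        counts = np.asarray(bin_contents, dtype=float)
        if counts.sum() == 0:
            return "RED", "histogram is empty"
        centers = (np.arange(len(counts)) + 0.5) / len(counts)  # normalised bin centres
        mean = float(np.average(centers, weights=counts))
        empty_fraction = float((counts == 0).mean())
        if empty_fraction > max_empty_fraction:
            return "RED", "%.0f%% of bins are empty" % (100 * empty_fraction)
        if not (mean_range[0] <= mean <= mean_range[1]):
            return "YELLOW", "mean %.2f outside %s" % (mean, str(mean_range))
        return "GREEN", "ok"

    if __name__ == "__main__":
        print(check_histogram([1, 5, 9, 12, 9, 5, 2, 1]))  # ('GREEN', 'ok')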

  8. Lyon - Atlas topographique Lyon antique

    OpenAIRE

    LENOBLE, Michel

    2015-01-01

    INSEE code of the commune: 69123. Atlas link (MCC): http://atlas.patrimoines.culture.fr/atlas/trunk/index.php?ap_theme=DOM_2.01.02&ap_bbox=4.772;45.707;4.899;45.808 The collaborative research programme "Topographic Atlas of Ancient Lyon" reached its thirteenth year of operation at the end of 2013. Attached to UMR 5138 (http://www.archeometrie.mom.fr/PCRAtlas.html), the research group comprises 30 researchers belonging to the various archaeological institutions involved in the archaeology of Ly...

  9. World Ocean Atlas 2005, Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  10. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2001-01-01

    Different phases of construction at Point 1, the zone of the ATLAS experiment. 14-02-2001: installing anchorages, insulation and scaffolding at UX 15. 18-04-2001: concreting the arch and placing the metal reinforcements at UX 15.

  11. Two new wheels for ATLAS

    CERN Multimedia

    2002-01-01

    Juergen Zimmer (Max Planck Institute), Roy Langstaff (TRIUMF/Victoria) and Sergej Kakurin (JINR), in front of one of the completed wheels of the ATLAS Hadronic End Cap Calorimeter. A decade of careful preparation and construction by groups in three continents is nearing completion with the assembly of two of the four 4 m diameter wheels required for the ATLAS Hadronic End Cap Calorimeter. The first two wheels have successfully passed all their mechanical and electrical tests, and have been rotated on schedule into the vertical position required in the experiment. 'This is an important milestone in the completion of the ATLAS End Cap Calorimetry' explains Chris Oram, who heads the Hadronic End Cap Calorimeter group. Like most experiments at particle colliders, ATLAS consists of several layers of detectors in the form of a 'barrel' and two 'end caps'. The Hadronic Calorimeter layer, which measures the energies of particles such as protons and pions, uses two techniques. The barrel part (Tile Calorimeter) cons...

  12. Nuclear Receptor Signaling Atlas (NURSA)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Nuclear Receptor Signaling Atlas (NURSA) is designed to foster the development of a comprehensive understanding of the structure, function, and role in disease...

  13. BioFuels Atlas (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moriarty, K.

    2011-02-01

    Presentation for biennial merit review of Biofuels Atlas, a first-pass visualization tool that allows users to explore the potential of biomass-to-biofuels conversions at various locations and scales.

  14. Dartmouth Atlas of Health Care

    Data.gov (United States)

    U.S. Department of Health & Human Services — For more than 20 years, the Dartmouth Atlas Project has documented glaring variations in how medical resources are distributed and used in the United States. The...

  15. Data federation strategies for ATLAS using XRootD

    International Nuclear Information System (INIS)

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
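
    The outward search order and the cost-aware brokering mentioned above can be caricatured as follows: among the local, regional and global scopes that hold a requested file, pick the one with the lowest current access cost, falling back through the hierarchy otherwise. The endpoint names, catalogues and cost numbers are invented; the real system relies on XRootD redirectors and measured network performance.

    # Toy model of outward (local -> regional -> global) federated data access.
    SEARCH_ORDER = ["local", "regional", "global"]

    def choose_source(lfn, catalogues, access_cost):
        """catalogues: scope -> set of files; access_cost: scope -> relative cost."""
        candidates = [scope for scope in SEARCH_ORDER
                      if lfn in catalogues.get(scope, set())]
        if not candidates:
            return None
        # Among the scopes that hold the file, pick the currently cheapest;
        # ties break in favour of the earlier scope in the search order.
        return min(candidates,
                   key=lambda scope: (access_cost[scope], SEARCH_ORDER.index(scope)))

    if __name__ == "__main__":
        catalogues = {
            "regional": {"data15_13TeV.AOD.0001"},
            "global": {"data15_13TeV.AOD.0001", "data15_13TeV.AOD.0002"},
        }
        cost = {"local": 1.0, "regional": 2.5, "global": 6.0}
        print(choose_source("data15_13TeV.AOD.0001", catalogues, cost))  # regional
        print(choose_source("data15_13TeV.AOD.0002", catalogues, cost))  # global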

  16. An anatomic gene expression atlas of the adult mouse brain.

    Science.gov (United States)

    Ng, Lydia; Bernard, Amy; Lau, Chris; Overly, Caroline C; Dong, Hong-Wei; Kuan, Chihchau; Pathak, Sayan; Sunkin, Susan M; Dang, Chinh; Bohland, Jason W; Bokil, Hemant; Mitra, Partha P; Puelles, Luis; Hohmann, John; Anderson, David J; Lein, Ed S; Jones, Allan R; Hawrylycz, Michael

    2009-03-01

    Studying gene expression provides a powerful means of understanding structure-function relationships in the nervous system. The availability of genome-scale in situ hybridization datasets enables new possibilities for understanding brain organization based on gene expression patterns. The Anatomic Gene Expression Atlas (AGEA) is a new relational atlas revealing the genetic architecture of the adult C57Bl/6J mouse brain based on spatial correlations across expression data for thousands of genes in the Allen Brain Atlas (ABA). The AGEA includes three discovery tools for examining neuroanatomical relationships and boundaries: (1) three-dimensional expression-based correlation maps, (2) a hierarchical transcriptome-based parcellation of the brain and (3) a facility to retrieve from the ABA specific genes showing enriched expression in local correlated domains. The utility of this atlas is illustrated by analysis of genetic organization in the thalamus, striatum and cerebral cortex. The AGEA is a publicly accessible online computational tool integrated with the ABA (http://mouse.brain-map.org/agea). PMID:19219037
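
    The expression-based correlation maps described above can be illustrated with a minimal sketch: pick a seed voxel and correlate its gene-expression profile with that of every other voxel. The array sizes and random values below are assumptions made purely for illustration; the AGEA itself operates on Allen Brain Atlas expression volumes.

      # Minimal sketch of a seed-voxel expression correlation map (illustrative data).
      import numpy as np

      rng = np.random.default_rng(0)
      n_voxels, n_genes = 1000, 200                  # assumed sizes, not ABA dimensions
      expression = rng.random((n_voxels, n_genes))   # expression energy per voxel and gene

      def correlation_map(expression: np.ndarray, seed_voxel: int) -> np.ndarray:
          """Pearson correlation of every voxel's gene profile with the seed voxel."""
          x = expression - expression.mean(axis=1, keepdims=True)
          seed = x[seed_voxel]
          num = x @ seed
          den = np.linalg.norm(x, axis=1) * np.linalg.norm(seed)
          return num / np.where(den == 0, 1.0, den)

      cmap = correlation_map(expression, seed_voxel=42)
      print("voxels most correlated with the seed:", np.argsort(cmap)[-5:])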

  17. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R; Thurfjell, Lennart; Waldemar, Gunhild; Soininen, Hilkka; Rueckert, Daniel

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity-normalised images can be used instead of... standard normalised mutual information in registration without compromising the accuracy, but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity... average similarity index between automatically and manually generated volumes was 0.849 (IBSR, six subcortical structures) and 0.880 (ADNI, hippocampus). The correlation coefficient for hippocampal volumes was 0.95 with the ADNI data. The computation time using a standard multicore PC was about 3...
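
    Two ingredients of the pipeline above, fusing the labels propagated from several registered atlases and scoring the result against a manual segmentation with a similarity (Dice) index, can be sketched as follows. The label volumes here are random placeholders; in the real pipeline they would come from non-rigidly registered atlases.

      # Sketch of majority-vote label fusion and the Dice similarity index
      # used to compare automatic and manual segmentations (random placeholder data).
      import numpy as np

      def majority_vote(labels: np.ndarray) -> np.ndarray:
          """Fuse binary label volumes from several atlases (shape: n_atlas, x, y, z)."""
          return (labels.mean(axis=0) >= 0.5).astype(np.uint8)

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          """Similarity index between two binary segmentations."""
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum())

      rng = np.random.default_rng(1)
      atlas_labels = rng.integers(0, 2, size=(7, 32, 32, 32))  # 7 registered atlases
      manual = rng.integers(0, 2, size=(32, 32, 32))           # manual reference volume

      fused = majority_vote(atlas_labels)
      print("Dice index vs manual segmentation:", round(dice(fused, manual), 3))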

  18. ATLAS starts moving in

    CERN Multimedia

    Della Mussia, S

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1st March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day. Two road trailers each with 64 wheels, positioned side by side. This was the solution chosen to transport the lower part of the central barrel of ATLAS' tile hadronic calorimeter from Building 185 to the PX16 shaft at Point 1 (see Figure 1). The transportation, and then the installation of the component in the experimental cavern, which took place over three days were, to say the least, rather spectacular. On 25 February, the component, consisting of eight 6-metre modules, was loaded on to the trailers. The segment of the barrel was transported on a steel support so that it wouldn't move an inch during the journey. On 26 February, once all the necessary safety checks had been carried out, the convoy was able to leave Buildi...

  19. The ATLAS Event Builder

    CERN Document Server

    Vandelli, W; Battaglia, A; Beck, H P; Blair, R; Bogaerts, A; Bosman, M; Ciobotaru, M; Cranfield, R; Crone, G; Dawson, J; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Drake, G; Ermoline, Y; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Gorini, B; Green, B; Haberichter, W; Haberli, C; Hauser, R; Hinkelbein, C; Hughes-Jones, R; Joos, M; Kieft, G; Klous, S; Korcyl, K; Kordas, K; Kugel, A; Leahu, L; Lehmann, G; Martin, B; Mapelli, L; Meessen, C; Meirosu, C; Misiejuk, A; Mornacchi, G; Müller, M; Nagasaka, Y; Negri, A; Pasqualucci, E; Pauly, T; Petersen, J; Pope, B; Schlereth, J L; Spiwoks, R; Stancu, S; Strong, J; Sushkov, S; Szymocha, T; Tremblet, L; Ünel, G; Vermeulen, J; Werner, P; Wheeler-Ellis, S; Wickens, F; Wiedenmann, W; Yu, M; Yasu, Y; Zhang, J; Zobernig, H; 2007 IEEE Nuclear Science Symposium and Medical Imaging Conference

    2008-01-01

    Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which, at its first two trigger levels (LVL1+LVL2), reduces the initial bunch crossing rate of 40 MHz to ~3 kHz. At this rate, the Event Builder collects the data from the readout system PCs (ROSs) and provides fully assembled events to the Event Filter (EF). The EF is the third trigger level and its aim is to achieve a further rate reduction to ~200 Hz to permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs, and substantial fractions of the Event Builder and Event Filter PCs, have been installed and commissioned. We report on performance tests of this initial system, which is capable of going beyond the required data rates and bandwidths for event building in the ATLAS experiment.
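
    The trigger rates quoted above translate directly into throughput requirements for the Event Builder farm. The short calculation below uses only those rates plus an assumed average event size of 1.5 MB, which is an illustrative figure rather than a number from the paper.

      # Back-of-envelope Event Builder throughput from the quoted trigger rates.
      # The 1.5 MB average event size is an assumption for illustration.
      bunch_crossing_rate_hz = 40e6   # initial LHC bunch crossing rate
      lvl2_output_rate_hz = 3e3       # after LVL1 + LVL2 (~3 kHz)
      ef_output_rate_hz = 200.0       # after the Event Filter (~200 Hz)
      event_size_mb = 1.5             # assumed average event size

      print("LVL1+LVL2 rejection factor:", bunch_crossing_rate_hz / lvl2_output_rate_hz)
      print("EF rejection factor:       ", lvl2_output_rate_hz / ef_output_rate_hz)
      print("Event-building bandwidth:  ", lvl2_output_rate_hz * event_size_mb / 1e3, "GB/s")
      print("Rate to permanent storage: ", ef_output_rate_hz * event_size_mb, "MB/s")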

  20. Spring comes for ATLAS

    CERN Multimedia

    Butin, F.

    2004-01-01

    (First published in the CERN weekly bulletin 24/2004, 7 June 2004.) A short while ago the ATLAS cavern underwent a spring clean, marking the end of the installation of the detector's support structures and the cavern's general infrastructure. The list of infrastructure to be installed in the ATLAS cavern from September 2003 was long: a thousand tonnes of mechanical structures spread over 13 storeys, two lifts, two 65-tonne overhead travelling cranes 25 metres above cavern floor, with a telescopic boom and cradle to access the remaining 10 metres of the cavern, a ventilation system for the 55 000 cubic metre cavern, a drainage system, a standard sprinkler system and an innovative foam fire-extinguishing system, as well as the external cryogenic system for the superconducting magnets and the liquid argon calorimeters (comprising, amongst other things, two helium refrigeration units, a nitrogen refrigeration unit and 5 km of piping for gaseous or liquid helium and nitrogen), not to mention the handling eq...

  1. ATLAS construction schedule

    CERN Multimedia

    Kotamaki, M

    The goal during the last few months has been to freeze and baseline as much as possible the schedules of various ATLAS systems and activities. The main motivations for the re-baselining of the schedules have been the new LHC schedule aiming at first collisions in early 2006 and the encountered delays in civil engineering as well as in the production of some of the detectors. The process was started by first preparing a new installation schedule that takes into account all the new external constraints and the new ATLAS staging scenario. The installation schedule version 3 was approved in the March EB and it provides the Ready For Installation (RFI) milestones for each system, i.e. the date when the system should be available for the start of the installation. TCn is now interacting with the systems aiming at a more realistic and resource loaded version 4 before the end of the year. Using the new RFI milestones as driving dates a new summary schedule has been prepared, or is under preparation, for each system....

  2. ATLAS Physicist in Space

    CERN Multimedia

    Bengt Lund-Jensen

    2007-01-01

    On December 9, the former ATLAS physicist Christer Fuglesang was launched into space onboard the STS-116 Space Shuttle flight from Kennedy Space Center in Florida. Christer worked on the development of the accordion-type liquid argon calorimeter and SUSY simulations in what eventually became ATLAS until summer 1992 when he became one out of six astronaut trainees with the European Space Agency (ESA). His selection out of a very large number of applicants from all over the ESA member states involved a number of tests in order to choose the most suitable candidates. As ESA astronaut Christer trained with the Russian Soyuz programme in Star City outside of Moscow from 1993 until 1996, when he moved to Houston to train for space shuttle missions with NASA. Christer belonged to the backup crew for the Euromir95 mission. After additional training in Russia, Christer qualified as ‘Soyuz return commander’ in 1998. Christer rerouting cables during his second space walk. (Photo: courtesy NASA) During...

  3. ATLAS Future Upgrade

    CERN Document Server

    Vankov, Peter; The ATLAS collaboration

    2016-01-01

    After the successful operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Along with maintenance and consolidation of the detector in the past few years, ATLAS has added an inner b-layer to its tracking system. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requir...

  4. The ATLAS tau trigger

    International Nuclear Information System (INIS)

    The ATLAS experiment at CERN's LHC has implemented a dedicated tau trigger system to select hadronically decaying tau leptons from the enormous background of QCD jets. This promises a significant increase in the discovery potential for the Higgs boson and in searches for physics beyond the Standard Model. The three-level trigger system has been optimized for efficiency and good background rejection. The first level uses information from the calorimeters only, while the two higher levels also include information from the tracking detectors. Shower shape variables and the track multiplicity are important variables to distinguish taus from QCD jets. At the initial luminosity of 10³¹ cm⁻²s⁻¹, single tau triggers with a transverse energy threshold of 50 GeV or higher can be run stand-alone. Below this level, the tau signatures will be combined with other event signatures. During the collection of a large sample of cosmic ray events in Autumn 2008, the tau trigger was operated as an integrated part of the ATLAS trigger system. This allowed the commissioning of technical aspects of the tau trigger.
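
    A toy version of the selection logic described above, a transverse-energy threshold combined with a narrow shower shape and low track multiplicity, is sketched below. The candidate fields and all cut values are illustrative assumptions and do not correspond to the actual ATLAS trigger menu.

      # Toy sketch of a tau-trigger-style selection using a shower-shape variable
      # and track multiplicity. All thresholds are illustrative, not ATLAS values.
      from dataclasses import dataclass

      @dataclass
      class TauCandidate:
          et_gev: float     # transverse energy from the calorimeter
          em_radius: float  # shower-shape variable (narrower for real taus)
          n_tracks: int     # tracks in the core cone (1 or 3 for hadronic taus)

      def passes_tau_trigger(c: TauCandidate,
                             et_threshold_gev: float = 50.0,
                             max_em_radius: float = 0.1,
                             max_tracks: int = 3) -> bool:
          """First-level-style ET cut plus higher-level shower and track requirements."""
          return (c.et_gev > et_threshold_gev
                  and c.em_radius < max_em_radius
                  and 1 <= c.n_tracks <= max_tracks)

      candidates = [TauCandidate(62.0, 0.07, 1),   # tau-like: narrow, low multiplicity
                    TauCandidate(80.0, 0.25, 7)]   # QCD-jet-like: wide, many tracks
      print([passes_tau_trigger(c) for c in candidates])   # [True, False]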

  5. ATLAS Solenoid Integration

    CERN Multimedia

    Ruber, R

    Last month the central solenoid was installed in the barrel cryostat, which it shares with the liquid argon calorimeter. Figure 1: Some members of the solenoid and liquid argon teams proudly pose in front of the barrel cryostat, complete with detector and magnet. Some two years ago the central solenoid arrived at CERN after being manufactured and tested in Japan. It was kept in storage until last October when it was finally moved to the barrel cryostat integration area. Here a position survey of the solenoid (with respect to the cryostat's inner warm vessel) was performed. Figure 2: The alignment survey by Dirk Mergelkuhl and Aude Wiart (EST-SU). At the start of the New Year the solenoid was moved to the cryostat insertion stand. Figure 3: The solenoid on the insertion stand, with Akira Yamamoto, the solenoid designer and project leader. Figure 4: Taka Kondo, ATLAS Japan spokesperson, and Shoichi Mizumaki, Toshiba project engineer for the ATLAS solenoid, celebrate the insertion. Aft...

  6. ATLAS Christmas lunch

    CERN Multimedia

    Francois Butin; Markus Nordberg

    The end-of-year ATLAS pit lunch is now a well-established tradition: the 4th edition took place in the most prestigious place at CERN: the "Globe de l'innovation", or simply "the Globe". This end-of-year event is the opportunity to thank all those working so hard at Point 1. The first event took place in December 2003. At that time, there was no Globe yet, and the party took place in the SX1 building, at the top of the shafts leading to the ATLAS cavern, with some 100 guests. In December 2004, we had the privilege of being the first to organize a lunch in the Globe with some 200 guests. Since then, many have followed our example! Well, almost: we were requested to refrain from serving "Tartiflette" again in there (a Savoyard specialty, using vast amounts of Reblochon, a smelly cheese...). It was said to have left a pungent odour for following events throughout 2004... Long queues formed for this special event. In December 2005, we were authorized to party in the Globe again (once we promised we would b...

  7. The PeptideAtlas Project

    OpenAIRE

    Deutsch, Eric W.

    2010-01-01

    PeptideAtlas is a multi-species compendium of peptides observed with tandem mass spectrometry methods. Raw mass spectrometer output files are collected from the community and reprocessed through a uniform analysis and validation pipeline that continues to advance. The results are loaded into a database and the information derived from the raw data is returned to the community via several web-based data exploration tools. The PeptideAtlas resource is useful for experiment planning, improving g...

  8. SLHC and ATLAS, Initial Plans

    CERN Document Server

    Nessi, M

    2008-01-01

    The recent developments in the plans and scenarios proposed by the LHC machine experts towards the SLHC have triggered various concerns and reservations in the ATLAS community. In particular, the possible need to insert dipoles, quadrupoles and protection elements inside the detector creates major concerns, because of its complex logistics and the risk of reducing the effectiveness of the ATLAS internal radiation shielding. Justifications and constraints on how to best use this space are given.

  9. ATLAS discoveries of optical transients

    Science.gov (United States)

    Tonry, J.; Denneau, L.; Stalder, B.; Heinze, A.; Sherstyuk, A.; Rest, A.; Smith, K. W.; Smartt, S. J.

    2016-06-01

    We report the following transients found by the ATLAS survey (see Tonry et al., ATel #8680). ATLAS is a twin 0.5 m telescope system on Haleakala and Mauna Loa. The first unit, operational on Haleakala, is robotically surveying the sky. Two filters are used, cyan and orange (denoted c and o; all magnitudes in the AB system); more information is available at http://www.fallingstar.com.

  10. ATLAS Overview Week 2009 Barcelona

    CERN Multimedia

    Claudia Marcelloni

    2009-01-01

    From October 5th to October 9th about 400 physicists from the ATLAS Collaboration met in Barcelona (Catalonia) to discuss the status of the experiment. The event was organized by the Institut de Física d'Altes Energies (IFAE), a member of the ATLAS Collaboration. Besides the scientific program, a few social events were organized, such as a reception at the Palau de Pedralbes, a visit to the Fundacio Joan Miro and a social dinner at the Maremagnum hall.

  11. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    Different phases of realisation at Point 1, the zone of the ATLAS experiment. The film shows the excavation work in the cavern and tunnels of the ATLAS experiment at Point 1. People can be seen mounting iron reinforcement at the side of the pit through which parts of the detector will later be lowered. Part of the film concentrates on USA 15 and the work done there.

  12. Equity valuation : Atlas Copco AB

    OpenAIRE

    Santos, Ricardo Manuel Castro Lopes Alba

    2016-01-01

    This Dissertation presents a literature review of some of the most appraised theories on equity valuation models. A thoughtful analysis is made, presenting the main advantages and restrictions of each model and setting the path for a discussion about improvements to be made on this field of study. A practical implementation follows, proposing a fair value estimation of Atlas Copco AB shares. Atlas Copco is a Swedish-based capital goods company, operating across four differen...

  13. ATLAS discoveries of optical transients

    Science.gov (United States)

    Tonry, J.; Denneau, L.; Stalder, B.; Heinze, A.; Sherstyuk, A.; Rest, A.; Smith, K. W.; Smartt, S. J.

    2016-08-01

    We report the following transients found by the ATLAS survey (see Tonry et al., ATel #8680). ATLAS is a twin 0.5 m telescope system on Haleakala and Mauna Loa. The first unit, operational on Haleakala, is robotically surveying the sky. Two filters are used, cyan and orange (denoted c and o; all magnitudes in the AB system); more information is available at http://www.fallingstar.com.

  14. EnviroAtlas - Metrics for Memphis, TN

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  15. EnviroAtlas - Portland, OR - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Portland, OR EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  16. Women of ATLAS - International Women's Day 2016

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Women play key roles in the ATLAS Experiment: from young physicists at the start of their careers to analysis group leaders and spokespersons of the collaboration. Celebrate International Women's Day by meeting a few of these inspiring ATLAS researchers.

  17. EnviroAtlas - Austin, TX - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Austin, TX EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  18. Methodology of Lithuanian climate atlas mapping

    Directory of Open Access Journals (Sweden)

    Valiukas Donatas

    2015-06-01

    Climate atlases summarize large sets of quantitative and qualitative data and are the result of complex analytical cartographic work. These special geographical publications summarize long-term meteorological observations and provide maps and figures that characterise different climate elements. Visual information is supplemented with explanatory texts. A lot of information on short- and long-term changes of climate elements was provided in the published Lithuanian atlases (Atlas of the Lithuanian SSR, 1981; Climate Atlas of Lithuania, 2013), in the prepared but unpublished Lithuanian Atlas (1989), and in upcoming national atlas publications (National Atlas of Lithuania, 1st part, 2014). Climate atlases have to be constantly updated to remain relevant and to describe current climate conditions. Comprehensive indicators of Lithuanian climate are provided in different cartographic publications. Different time periods, various data sets and diverse cartographic data analysis tools and visualisation methods were used in these different publications.

  19. Forward Physics at the ATLAS experiment

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    The poster summarizes forward physics at the ATLAS experiment. It focuses on the AFP project, which proposes to install forward detectors at 220 m (AFP220) and 420 m (AFP420) from the ATLAS interaction point for measurements at high luminosity.

  20. EnviroAtlas - Memphis, TN - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Memphis, TN EnviroAtlas community. The block groups are from the US Census Bureau and are included/excluded based...

  1. ATLAS : civil engineering at Point 1

    CERN Multimedia

    CERN Audiovisual Unit

    2002-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the various pieces of infrastructure for ATLAS. Real underground video.

  2. EnviroAtlas - Metrics for Portland, ME

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  3. EnviroAtlas - Metrics for Phoenix, AZ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  4. EnviroAtlas - Metrics for Paterson, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  5. EnviroAtlas - Metrics for Pittsburgh, PA

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  6. EnviroAtlas - Metrics for Tampa, FL

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  7. EnviroAtlas - Metrics for Milwaukee, WI

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  8. EnviroAtlas - Metrics for Woodbine, IA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  9. EnviroAtlas - Metrics for Durham, NC

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas ). The layers in these web...

  10. EnviroAtlas - Paterson, NJ - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Paterson, NJ EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  11. EnviroAtlas - Metrics for Fresno, CA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  12. EnviroAtlas - Metrics for Portland, OR

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http:/www.epa.gov/enviroatlas). The layers in these web...

  13. EnviroAtlas - Pittsburgh, PA - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Pittsburgh, PA EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based...

  14. Atlas-Based Prostate Segmentation Using an Hybrid Registration

    OpenAIRE

    Martin, Sébastien; Daanen, Vincent; Troccaz, Jocelyne

    2008-01-01

    Purpose: This paper presents the preliminary results of a semi-automatic method for prostate segmentation of Magnetic Resonance Images (MRI), which aims to be incorporated in a navigation system for prostate brachytherapy. Methods: The method is based on the registration of an anatomical atlas, computed from a population of 18 MRI exams, onto a patient image. A hybrid registration framework, which couples an intensity-based registration with a robust point-matching algorithm, is used for both atl...

  15. Preparing ATLAS reconstruction software for LHC's Run 2

    CERN Document Server

    Mitrevski, Jovan; The ATLAS collaboration

    2015-01-01

    In order to maximize the physics potential of the ATLAS detector during LHC's Run 2, the reconstruction software has been updated. Flat computing budgets required a factor of three improvement in run time, while the new xAOD data format forced changes in the reconstruction algorithms. Physics performance improvements have been made in the reconstruction of various objects, using improved techniques such as multivariate discriminants. This paper will present an overview of the improvements that have been made.

  16. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    International Nuclear Information System (INIS)

    Due to the lack of imaging modalities to identify prostate cancer in vivo, current TRUS-guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert-annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure: the segmentation of the prostate gland from a patient's TRUS image, followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphics processing unit (GPU) to meet the critical processing speed requirements for atlas-guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols designed to maximize the overall detection rate. Using a GPU, the patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas-guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87%, respectively, when validated on the same set of data, whereas the sextant biopsy approach, without the use of the 3D cancer atlas, detected only 70.5% of the cancers using the same histology data. We estimate a 10-20% increase in prostate cancer detection rates

  17. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    Energy Technology Data Exchange (ETDEWEB)

    Narayanan, R; Suri, J S [Eigen Inc, Grass Valley, CA (United States); Werahera, P N; Barqawi, A; Crawford, E D [University of Colorado, Denver, CO (United States); Shinohara, K [University of California, San Francisco, CA (United States); Simoneau, A R [University of California, Irvine, CA (United States)], E-mail: jas.suri@eigen.com

    2008-10-21

    Due to the lack of imaging modalities to identify prostate cancer in vivo, current TRUS-guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert-annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure: the segmentation of the prostate gland from a patient's TRUS image, followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphics processing unit (GPU) to meet the critical processing speed requirements for atlas-guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols designed to maximize the overall detection rate. Using a GPU, the patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas-guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87%, respectively, when validated on the same set of data, whereas the sextant biopsy approach, without the use of the 3D cancer atlas, detected only 70.5% of the cancers using the same histology data. We estimate a 10-20% increase in prostate cancer
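
    As a rough illustration of how a registered likelihood atlas can drive target selection, the sketch below greedily picks the highest-likelihood voxels subject to a minimum spacing. The likelihood volume, core count and spacing are invented for this example; the system described above uses the GPU-registered 3D cancer atlas and clinically optimized 7- and 12-core protocols.

      # Illustrative greedy selection of biopsy targets from a registered
      # cancer-likelihood atlas (volume size, spacing and core count are assumptions).
      import numpy as np

      def select_biopsy_targets(likelihood, n_cores=7, min_spacing_vox=8.0):
          """Greedily pick high-likelihood voxels at least min_spacing_vox apart."""
          order = np.argsort(likelihood, axis=None)[::-1]          # most likely first
          coords = np.array(np.unravel_index(order, likelihood.shape)).T
          targets = []
          for c in coords:
              if all(np.linalg.norm(c - np.array(t)) >= min_spacing_vox for t in targets):
                  targets.append(tuple(int(v) for v in c))
              if len(targets) == n_cores:
                  break
          return targets

      rng = np.random.default_rng(2)
      atlas_likelihood = rng.random((40, 40, 30))   # stand-in for the registered atlas
      print(select_biopsy_targets(atlas_likelihood))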

  18. ATLAS Award for Shield Supplier

    CERN Multimedia

    2004-01-01

    ATLAS technical coordinator Dr. Marzio Nessi presents the ATLAS supplier award to Vojtech Novotny, Director General of Skoda Hute. On 3 November, the ATLAS experiment honoured one of its suppliers, Skoda Hute s.r.o., of Plzen, Czech Republic, for their work on the detector's forward shielding elements. These huge and very massive cylinders surround the beampipe at either end of the detector to block stray particles from interfering with the ATLAS muon chambers. For the shields, Skoda Hute produced 10 cast-iron pieces with a total weight of 780 tonnes at a cost of 1.4 million CHF. Although there are many iron foundries in the CERN member states, only a limited number can produce castings of the necessary size: the large pieces range in weight from 59 to 89 tonnes and are up to 1.5 metres thick. The forward shielding was designed by the ATLAS Technical Coordination in close collaboration with the ATLAS groups from the Czech Technical University and Charles University in Prague. The Czech groups a...

  19. A computerized adjustable brain atlas

    International Nuclear Information System (INIS)

    A computerized brain atlas, adjustable to the patient's anatomy, has been developed. It is primarily intended for use in positron emission tomography (PET), but may also be employed in other fields utilizing neuroimaging, such as stereotactic surgery, transmission computerized tomography (CT) and magnetic resonance imaging (MRI). The atlas is based on anatomical information obtained from digitized cryosectioned brains. It can be adjusted to fit a wide range of images from individual brains with normal anatomy. The corresponding transformation is chosen so that the modified atlas agrees with a set of CT or NMR images of the patient. The computerized atlas can be used to improve the quantification and evaluation of PET data by: aiding and improving the selection of regions of interest; facilitating comparisons of functional image data from different individuals or groups of individuals; facilitating the comparison of different examinations of the same patient, thus reducing the need for reproducible fixation systems; providing external a priori anatomical information to be used in the image reconstruction; improving the attenuation and scatter corrections; and aiding in the selection of a suitable patient orientation during the PET study. By applying the inverse atlas transformation to the PET data set it is possible to relate the PET information to the anatomy of the reference atlas. Reformatted PET data from different patients can thus be averaged, and averages from different categories of patients can be compared. The method will facilitate the identification of statistically significant differences in the PET information from different groups of patients. (orig.)
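
    The forward and inverse use of the atlas transformation described above can be sketched with a simple affine model: the forward map adapts atlas coordinates to the patient, and its inverse brings patient coordinates back into the common atlas space where PET data from different subjects can be averaged. The transformation parameters below are arbitrary placeholders; the actual atlas uses transformations fitted to each patient's CT or NMR images.

      # Minimal affine sketch of mapping between atlas space and patient space.
      # The transformation parameters are arbitrary placeholders for illustration.
      import numpy as np

      # Forward map: atlas -> patient (anisotropic scaling plus translation,
      # expressed as a 4x4 matrix acting on homogeneous coordinates).
      A = np.diag([1.05, 0.97, 1.10, 1.0])
      A[:3, 3] = [2.0, -1.5, 0.5]
      A_inv = np.linalg.inv(A)              # inverse map: patient -> atlas

      def to_homogeneous(points):
          return np.hstack([points, np.ones((points.shape[0], 1))])

      atlas_points = np.array([[10.0, 20.0, 30.0], [0.0, 0.0, 0.0]])
      patient_points = (to_homogeneous(atlas_points) @ A.T)[:, :3]
      back_in_atlas = (to_homogeneous(patient_points) @ A_inv.T)[:, :3]

      # PET values sampled at patient coordinates can now be accumulated in the
      # common atlas frame and averaged across patients.
      print(np.allclose(back_in_atlas, atlas_points))   # True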

  20. DIALECT ATLASES / AĞIZ ATLASLARI

    OpenAIRE

    Prof. Dr. Erdoğan BOZ

    2008-01-01

    Dialect atlases provide the characteristics of dialects spoken in a country in terms of phonology, morphology and syntax, and also include satisfactory information about vocabulary. Interpreted in the light of this knowledge, dialect atlases provide important inferences with respect to both sociocultural and political matters. Although many developed countries in the world have completed their dialect atlases, such work has not started in Turkey yet. Preparing the dialect atlas of Turkey Turkish before disappeara...