WorldWideScience

Sample records for ATLAS Level-1 MUCTPI

  1. The ATLAS Level-1 Muon to Central Trigger Processor Interface

    CERN Document Server

    Berge, D; Farthouat, P; Haas, S; Klofver, P; Krasznahorkay, A; Messina, A; Pauly, T; Schuler, G; Spiwoks, R; Wengler, T; PH-EP

    2007-01-01

The Muon to Central Trigger Processor Interface (MUCTPI) is part of the ATLAS Level-1 trigger system and connects the output of the muon trigger system to the Central Trigger Processor (CTP). At every bunch crossing (BC), the MUCTPI receives information on muon candidates from each of the 208 muon trigger sectors and calculates the total multiplicity for each of six transverse momentum (pT) thresholds. This multiplicity value is then sent to the CTP, where it is used together with the input from the Calorimeter trigger to make the final Level-1 Accept (L1A) decision. In addition, the MUCTPI provides summary information to the Level-2 trigger and to the data acquisition (DAQ) system for events selected at Level-1. This information is used to define the regions of interest (RoIs) that drive the Level-2 muon trigger processing. The MUCTPI system consists of a 9U VME chassis with a dedicated active backplane and 18 custom-designed modules. The design of the modules is based on state-of-the-art FPGA devices and special ...
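
    As an illustration of the multiplicity calculation described in this record, the following minimal C++ sketch counts muon candidates per pT threshold. The inclusive counting and the saturation of each count at 7 (a 3-bit field) are illustrative assumptions, not the production firmware logic, and the data structures are hypothetical.

      // Minimal sketch: per-threshold muon multiplicity counting.
      #include <array>
      #include <cstdint>
      #include <vector>

      constexpr int kNumThresholds = 6;  // six pT thresholds, as in the record

      struct MuonCandidate {
          int sector;        // 0..207, one of the 208 muon trigger sectors
          int thresholdBit;  // highest pT threshold passed, 0..5 (assumed encoding)
      };

      std::array<uint8_t, kNumThresholds>
      countMultiplicities(const std::vector<MuonCandidate>& candidates) {
          std::array<uint8_t, kNumThresholds> mult{};  // one counter per threshold
          for (const auto& c : candidates) {
              // A candidate passing threshold i is assumed to also count for all
              // lower thresholds (inclusive counting).
              for (int t = 0; t <= c.thresholdBit; ++t) {
                  if (mult[t] < 7) ++mult[t];  // saturate at 7 (assumed 3-bit field)
              }
          }
          return mult;
      }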

  2. Run Control Communication for the Upgrade of the ATLAS Muon-to-Central-Trigger-Processor Interface (MUCTPI)

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223859; The ATLAS collaboration; Armbruster, Aaron James; Carrillo-Montoya, German D.; Chelstowska, Magda Anna; Czodrowski, Patrick; Deviveiros, Pier-Olivier; Eifert, Till; Ellis, Nicolas; Galster, Gorm Aske Gram Krohn; Haas, Stefan; Helary, Louis; Lagkas Nikolos, Orestis; Marzin, Antoine; Pauly, Thilo; Ryjov, Vladimir; Schmieden, Kristof; Silva Oliveira, Marcos Vinicius; Stelzer, Harald Joerg; Vichoudis, Paschalis; Wengler, Thorsten; Farthouat, Philippe

    2018-01-01

The Muon-to-Central Trigger Processor Interface (MUCTPI) of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN will be upgraded to an ATCA blade system for Run 3. The new design requires the development of new communication models for control, configuration and monitoring. A System-on-Chip (SoC) with a programmable logic part and a processor part will be used for communication to the run control system and to the MUCTPI processing FPGAs. Different approaches have been compared. First, we tried an available UDP-based implementation in firmware for the programmable logic. Although this approach works as expected, it does not provide any flexibility to extend the functionality to more complex operations, e.g. for serial protocols. Second, we used the SoC processor with an embedded Linux operating system and application-specific software written in C++ using a TCP remote-procedure-call approach. The software is built and maintained using the Yocto/OpenEmbedded framework. This approach was successfully...
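
    The following schematic C++ sketch illustrates the TCP remote-procedure-call idea described above: software running under embedded Linux on the SoC answers register-read requests arriving over a TCP socket. The message format, port number and register map are invented for illustration; this is not the actual MUCTPI run-control software.

      // Schematic register-read server: request = 32-bit address, reply = 32-bit value.
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <sys/socket.h>
      #include <unistd.h>
      #include <cstdint>
      #include <map>

      int main() {
          std::map<uint32_t, uint32_t> registers = {{0x0, 0xBEEF}};  // fake register map

          int srv = socket(AF_INET, SOCK_STREAM, 0);
          sockaddr_in addr{};
          addr.sin_family = AF_INET;
          addr.sin_addr.s_addr = htonl(INADDR_ANY);
          addr.sin_port = htons(5555);             // arbitrary example port
          bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
          listen(srv, 1);

          for (;;) {
              int cli = accept(srv, nullptr, nullptr);
              uint32_t request = 0;                // request carries a register address
              while (recv(cli, &request, sizeof(request), MSG_WAITALL) ==
                     static_cast<ssize_t>(sizeof(request))) {
                  uint32_t value = htonl(registers[ntohl(request)]);
                  send(cli, &value, sizeof(value), 0);  // reply with the register value
              }
              close(cli);
          }
      }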

  3. Run control communication for the upgrade of the ATLAS Muon-to-Central Trigger Processor Interface (MUCTPI)

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223859; The ATLAS collaboration; Armbruster, Aaron James; Carrillo-Montoya, German D.; Chelstowska, Magda Anna; Czodrowski, Patrick; Deviveiros, Pier-Olivier; Eifert, Till; Ellis, Nicolas; Farthouat, Philippe; Galster, Gorm Aske Gram Krohn; Haas, Stefan; Helary, Louis; Lagkas Nikolos, Orestis; Marzin, Antoine; Pauly, Thilo; Ryjov, Vladimir; Schmieden, Kristof; Silva Oliveira, Marcos Vinicius; Stelzer, Harald Joerg; Vichoudis, Paschalis; Wengler, Thorsten

The Muon-to-Central-Trigger-Processor Interface (MUCTPI) of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN will be upgraded to an ATCA blade system for Run 3, starting in 2021. The new design requires the development of new communication models for control, configuration and monitoring. A System-on-Chip (SoC) with a programmable logic part and a processor part will be used for communication to the run control system and to the MUCTPI processing FPGAs. Different approaches have been compared. First, we tried an available UDP-based implementation in firmware for the programmable logic. Although this approach works as expected, it does not provide any flexibility to extend the functionality to more complex operations, e.g. for serial protocols. Second, we used a SoC processor with an embedded Linux operating system and application-specific software written in C++ using a TCP remote-procedure-call approach. The software is built and maintained using the framework of the Yocto Project. This approa...

  4. The ATLAS Muon to Central Trigger Processor Interface Upgrade for the Run 3 of the LHC

    CERN Document Server

    Armbruster, Aaron James; The ATLAS collaboration; Chelstowska, Magda Anna

    2017-01-01

To cope with the higher luminosity and physics cross-sections for the third run of the Large Hadron Collider (LHC) and beyond, the Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is being upgraded. Part of the TDAQ system, the Muon to Central Trigger Processor Interface (MUCTPI) receives muon candidate information from each of the 208 barrel and endcap muon trigger sectors, counts muon candidates for each transverse momentum threshold and sends the result to the Central Trigger Processor (CTP). The MUCTPI takes into account the possible overlap between trigger sectors in order to avoid double counting of muon candidates. A full redesign and replacement of the existing MUCTPI is required in order to provide full-granularity muon position information at the bunch crossing rate to the Topological Trigger processor (L1Topo) and to be able to interface with the new sector logic modules. State-of-the-art FPGA technology and high-density ribbon fiber-optic transmitters and receivers are being...
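
    The overlap handling mentioned above can be pictured with the simplified C++ sketch below, which keeps only the higher-pT candidate when two geometrically overlapping sectors report a muon in the same bunch crossing. The overlap map and candidate fields are hypothetical; the real MUCTPI resolves overlaps with programmable look-up tables in firmware.

      // Simplified overlap removal between muon trigger sectors.
      #include <algorithm>
      #include <set>
      #include <utility>
      #include <vector>

      struct Candidate {
          int sector;       // trigger sector index
          int roi;          // region-of-interest index within the sector
          int ptThreshold;  // higher value = higher pT threshold passed
      };

      using OverlapMap = std::set<std::pair<int, int>>;  // pairs of overlapping sectors

      std::vector<Candidate> removeOverlaps(std::vector<Candidate> cands,
                                            const OverlapMap& overlaps) {
          // Sort by decreasing pT so higher-pT candidates are kept preferentially.
          std::sort(cands.begin(), cands.end(),
                    [](const Candidate& a, const Candidate& b) {
                        return a.ptThreshold > b.ptThreshold;
                    });
          std::vector<Candidate> kept;
          for (const auto& c : cands) {
              bool duplicate = false;
              for (const auto& k : kept) {
                  if (overlaps.count({std::min(c.sector, k.sector),
                                      std::max(c.sector, k.sector)})) {
                      duplicate = true;  // same muon seen by two overlapping sectors
                      break;
                  }
              }
              if (!duplicate) kept.push_back(c);
          }
          return kept;
      }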

  5. The ATLAS Muon-to-Central Trigger Processor Interface Upgrade for the Run 3 of the LHC

    CERN Document Server

    Armbruster, Aaron James; The ATLAS collaboration

    2017-01-01

To cope with the higher luminosity and physics cross-sections for the third run of the Large Hadron Collider (LHC) and beyond, the Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is being upgraded. Part of the TDAQ system, the Muon to Central Trigger Processor Interface (MUCTPI) receives muon candidate information from each of the 208 barrel and endcap muon trigger sectors, counts muon candidates for each transverse momentum threshold and sends the result to the Central Trigger Processor (CTP). The MUCTPI takes into account the possible overlap between trigger sectors in order to avoid double counting of muon candidates. A full redesign and replacement of the existing MUCTPI is required in order to provide full-granularity muon position information at the bunch crossing rate to the Topological Trigger processor (L1Topo) and to be able to interface with the new sector logic modules. State-of-the-art FPGA technology and high-density ribbon fiber-optic transmitters and receivers are being...

  6. Integration tests of prototype LVL1 calorimeter trigger CP/JEP ROD and LVL2 trigger Region-of-Interest Builder. Also visible in the photo are two further racks containing the demonstrator prototypes of the LVL1 CTP and the MUCTPI.

    CERN Multimedia

    Gee, N

    2001-01-01

    Integration tests of prototype LVL1 calorimeter trigger CP/JEP ROD and LVL2 trigger Region-of-Interest Builder. Also visible in the photo are two further racks containing the demonstrator prototypes of the LVL1 CTP and the MUCTPI.

  7. The ATLAS Level-1 Calorimeter Trigger

    International Nuclear Information System (INIS)

    Achenbach, R; Andrei, V; Adragna, P; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J P; Asman, B; Bohm, C; Ay, C; Bauss, B; Bendel, M; Dahlhoff, A; Eckweiler, S; Booth, J R A; Thomas, P Bright; Charlton, D G; Collins, N J; Curtis, C J

    2008-01-01

The ATLAS Level-1 Calorimeter Trigger uses reduced-granularity information from all the ATLAS calorimeters to search for high transverse-energy electrons, photons, τ leptons and jets, as well as high missing and total transverse energy. The calorimeter trigger electronics has a fixed latency of about 1 μs, using programmable custom-built digital electronics. This paper describes the Calorimeter Trigger hardware, as installed in the ATLAS electronics cavern.

  8. The ATLAS Level-1 Calorimeter Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Achenbach, R; Andrei, V [Kirchhoff-Institut fuer Physik, University of Heidelberg, D-69120 Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London E1 4NS (United Kingdom); Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J P [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxon OX11 0QX (United Kingdom); Asman, B; Bohm, C [Fysikum, Stockholm University, SE-106 91 Stockholm (Sweden); Ay, C; Bauss, B; Bendel, M; Dahlhoff, A; Eckweiler, S [Institut fuer Physik, University of Mainz, D-55099 Mainz (Germany); Booth, J R A; Thomas, P Bright; Charlton, D G; Collins, N J; Curtis, C J [School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT (United Kingdom)], E-mail: e.eisenhandler@qmul.ac.uk (and others)

    2008-03-15

The ATLAS Level-1 Calorimeter Trigger uses reduced-granularity information from all the ATLAS calorimeters to search for high transverse-energy electrons, photons, τ leptons and jets, as well as high missing and total transverse energy. The calorimeter trigger electronics has a fixed latency of about 1 μs, using programmable custom-built digital electronics. This paper describes the Calorimeter Trigger hardware, as installed in the ATLAS electronics cavern.

  9. The ATLAS Level-1 Topological Trigger Performance

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00371751; The ATLAS collaboration

    2016-01-01

The LHC will collide protons in the ATLAS detector with increasing luminosity through 2016, placing stringent operational and physical requirements on the ATLAS trigger system in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz, while not rejecting interesting physics events. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, a muon trigger and a central trigger processor. During the LHC shutdown after Run 1 finished in 2013, the Level-1 trigger system was upgraded, including hardware, firmware and software updates. In particular, new electronics modules were introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. They receive real-time information from the Level-1 calorimeter and muon triggers, which...

  10. ATLAS Level-1 Calorimeter Trigger: Status and Development

    CERN Document Server

    Bracinik, J; The ATLAS collaboration

    2013-01-01

The ATLAS Level-1 Calorimeter Trigger seeds all the calorimeter-based triggers in the ATLAS experiment at the LHC. The inputs to the system are analogue signals of reduced granularity, formed by summing cells from both the ATLAS Liquid Argon and Tile calorimeters. Several stages of analogue and then digital processing, largely performed in FPGAs, refine these signals via configurable and flexible algorithms into identified physics objects, for example electron, tau or jet candidates. The complete processing chain is performed in a pipelined system at the LHC bunch-crossing frequency, and with a fixed latency of about 1 μs. The first LHC run from 2009-2013 provided a varied and challenging environment for first-level triggers. While the energy and luminosity were below the LHC design values, the pile-up conditions were similar to the nominal ones. The physics ambitions of the experiment also tested the performance of the Level-1 system while keeping within the rate limits set by detector readout. This presentation will ...

  11. Commissioning the ATLAS Level-1 Central Trigger System

    CERN Document Server

    Sherman, Daniel

    2010-01-01

    The ATLAS Level-1 central trigger is a critical part of ATLAS operation. It receives the 40 MHz bunch clock from the LHC and distributes it to all sub-detectors. It initiates their read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors and a variety of additional trigger inputs from detectors in the forward region. It also provides trigger summary information to the data acquisition system and the Level-2 trigger system. In this paper, we present the completion of the installed central trigger system, its performance during cosmic-ray data taking and the experience gained with triggering on the first LHC beams.

  12. Towards a Level-1 Tracking Trigger for the ATLAS Experiment

    CERN Document Server

    De Santo, A; The ATLAS collaboration

    2016-01-01

    In preparation for the high-luminosity phase of the Large Hadron Collider, ATLAS is planning a trigger upgrade that will enable the experiment to use tracking information already at the first trigger level. This will provide enhanced background rejection power at trigger level while preserving much needed flexibility for the trigger system. The status and current plans for the new ATLAS Level-1 tracking trigger are presented.

  13. Calibration for the ATLAS Level-1 Calorimeter-Trigger

    International Nuclear Information System (INIS)

    Foehlisch, F.

    2007-01-01

This thesis describes developments and tests that are necessary to operate the Pre-Processor of the ATLAS Level-1 Calorimeter Trigger for data acquisition. The major tasks of the Pre-Processor comprise the digitization, time alignment and calibration of signals coming from the ATLAS calorimeter. Dedicated hardware has been developed that must be configured in order to fulfill these tasks. Software has been developed that implements the register model of the Pre-Processor Modules and allows the Pre-Processor to be set up. In order to configure the Pre-Processor in the context of an ATLAS run, user settings and the results of calibration measurements are used to derive adequate settings for the registers of the Pre-Processor. The procedures that allow the required measurements to be performed and the results to be stored in a database are demonstrated. Furthermore, tests that go along with the ATLAS installation are presented and results are shown. (orig.)
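
    A toy C++ sketch of the idea of deriving register settings from calibration results, as described in this record, is shown below. The register names and the pedestal/noise rule are hypothetical; the real Pre-Processor register model is far more extensive.

      // Derive example register values from per-channel calibration measurements.
      #include <cstdint>
      #include <map>
      #include <string>

      struct ChannelCalibration {
          double pedestal;  // measured baseline in ADC counts
          double noiseRms;  // measured noise RMS in ADC counts
      };

      std::map<std::string, uint16_t>
      deriveRegisterSettings(const ChannelCalibration& calib) {
          std::map<std::string, uint16_t> regs;
          // Example rule: noise-suppression threshold a few sigma above the pedestal.
          regs["PEDESTAL"]  = static_cast<uint16_t>(calib.pedestal + 0.5);
          regs["THRESHOLD"] = static_cast<uint16_t>(calib.pedestal + 4.0 * calib.noiseRms + 0.5);
          return regs;
      }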

  14. Calibration for the ATLAS Level-1 Calorimeter-Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Foehlisch, F.

    2007-12-19

This thesis describes developments and tests that are necessary to operate the Pre-Processor of the ATLAS Level-1 Calorimeter Trigger for data acquisition. The major tasks of the Pre-Processor comprise the digitization, time alignment and calibration of signals coming from the ATLAS calorimeter. Dedicated hardware has been developed that must be configured in order to fulfill these tasks. Software has been developed that implements the register model of the Pre-Processor Modules and allows the Pre-Processor to be set up. In order to configure the Pre-Processor in the context of an ATLAS run, user settings and the results of calibration measurements are used to derive adequate settings for the registers of the Pre-Processor. The procedures that allow the required measurements to be performed and the results to be stored in a database are demonstrated. Furthermore, tests that go along with the ATLAS installation are presented and results are shown. (orig.)

  15. The Topological Processor for the future ATLAS Level-1 Trigger

    CERN Document Server

    Kahra, C; The ATLAS collaboration

    2014-01-01

ATLAS is an experiment at the Large Hadron Collider (LHC), located at the European Organization for Nuclear Research (CERN) in Switzerland. By 2015 the LHC instantaneous luminosity will be increased from $10^{34}$ up to $3\cdot 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. This places stringent operational and physical requirements on the ATLAS Trigger in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz while at the same time selecting those events that contain interesting physics. The Level-1 Trigger is the first rate-reducing step in the ATLAS Trigger, with an output rate of 100 kHz and a decision latency of less than $2.5\,\mu\mathrm{s}$. It is composed of the Calorimeter Trigger, the Muon Trigger and the Central Trigger Processor (CTP). In 2014, there will be a new electronics module: the Topological Processor (L1Topo). The L1Topo will make it possible, for the first time, to use detailed information from subdetectors in a single Level-1 module. This allows the determi...

  16. Performance of ATLAS RPC Level-1 Muon trigger during the 2015 data taking

    CERN Document Server

    Corradi, Massimo; The ATLAS collaboration

    2016-01-01

The Level-1 Muon Barrel Trigger is one of the main elements of the event selection of the ATLAS experiment at the Large Hadron Collider. Its input stage consists of an array of processors receiving the full granularity of data from Resistive Plate Chambers in the central area of the ATLAS detector ("Barrel"). The trigger efficiency and the level of synchronisation of its elements with the rest of ATLAS and the LHC clock are crucial figures of merit for this system: many parameters of the constituent RPC detectors and the trigger electronics have to be constantly and carefully checked to ensure correct functioning of the Level-1 selection. Notwithstanding the complexity of such a large array of integrated RPC detectors, the ATLAS Level-1 system has resumed operations successfully after the two-year shutdown, with performance levels similar to those of Run 1. We present the inclusive monitoring of the RPC+L1 system that we have developed to characterise the behaviour of the system, using reconstructed muons in events selected by...

  17. L1Track: A fast Level 1 track trigger for the ATLAS high luminosity upgrade

    International Nuclear Information System (INIS)

    Cerri, Alessandro

    2016-01-01

With the planned high-luminosity upgrade of the LHC (HL-LHC), the ATLAS detector will see its collision rate increase by approximately a factor of 5 with respect to the current LHC operation. The earliest hardware-based ATLAS trigger stage (“Level 1”) will have to provide a higher rejection factor in a more difficult environment: a new, improved Level 1 trigger architecture is under study, which includes the possibility of extracting tracking information with low latency and high accuracy in time for the decision-making process. In this context, the feasibility of potential approaches aimed at providing low-latency, high-quality tracking at Level 1 is discussed. - Highlights: • The HL-LHC requires highly performing event selection. • ATLAS is studying the implementation of tracking at the very first trigger level. • Low latency and high quality seem to be achievable with dedicated hardware and an adequate detector readout architecture.

  18. Digital Filter Performance for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Hadley, D R; The ATLAS collaboration

    2010-01-01

The ATLAS Level-1 Calorimeter Trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates, and to measure total and missing ET in the ATLAS Liquid Argon and Tile calorimeters. It is a pipelined processor system, with a new set of inputs being evaluated every 25 ns. The overall trigger decision has a latency budget of 2 µs, including all transmission delays. The calorimeter trigger uses about 7200 reduced-granularity analogue signals, which are first digitized at the 40 MHz LHC bunch-crossing frequency, before being passed to a digital Finite Impulse Response (FIR) filter. Due to latency and chip real-estate constraints, only a simple 5-element filter with limited precision can be used. Nevertheless, this filter achieves a significant reduction in noise, along with improving the bunch-crossing assignment and energy resolution for small signals. The context in which digital filters are used for the ATLAS Level-1 Calorimeter Trigger will be presented, before describing ...
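
    A minimal C++ sketch of a 5-tap integer FIR filter of the kind described above, applied to successive 40 MHz samples, follows. The coefficients and the bit truncation are example values only, not the deployed ones.

      // Illustrative 5-element FIR filter with integer arithmetic and truncation.
      #include <array>
      #include <cstdint>
      #include <deque>

      class Fir5 {
      public:
          explicit Fir5(std::array<int, 5> coeffs) : coeffs_(coeffs) {}

          // Push one ADC sample per bunch crossing; returns the filtered value once
          // five samples are available, truncated to emulate limited precision.
          int process(int adcSample) {
              window_.push_back(adcSample);
              if (window_.size() > 5) window_.pop_front();
              if (window_.size() < 5) return 0;
              int64_t acc = 0;
              for (int i = 0; i < 5; ++i) acc += int64_t(coeffs_[i]) * window_[i];
              return static_cast<int>(acc >> 3);  // example truncation of low bits
          }

      private:
          std::array<int, 5> coeffs_;
          std::deque<int> window_;
      };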

  19. The Digital Algorithm Processors for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Silverstein, S

    2010-01-01

The ATLAS Level-1 Calorimeter Trigger identifies high-ET jets, electrons/photons and hadrons and measures total and missing transverse energy in proton-proton collisions at the Large Hadron Collider. Two subsystems – the Jet/Energy-sum Processor (JEP) and the Cluster Processor (CP) – process data from every crossing, and report feature multiplicities and energy sums to the ATLAS Central Trigger Processor, which produces a Level-1 Accept decision. Locations and types of identified features are read out to the Level-2 Trigger as regions-of-interest, and quality-monitoring information is read out to the ATLAS data acquisition system. The JEP and CP subsystems share a great deal of common infrastructure, including a custom backplane, several common hardware modules, and readout hardware. Some of the common modules use FPGAs with selectable firmware configurations based on the location in the system. This approach saved substantial development effort and provided a uniform model for software development. We pre...
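
    A schematic C++ example of the crate-level and system-level merging mentioned above (summing feature multiplicities from several sources) follows. The number of thresholds and the saturation at 7 are illustrative assumptions.

      // Saturating sum of per-threshold multiplicities; the same routine can be
      // applied once per crate over its modules and once over the crate results.
      #include <algorithm>
      #include <array>
      #include <cstdint>
      #include <vector>

      constexpr int kNumThresholds = 8;  // example value
      using Multiplicities = std::array<uint8_t, kNumThresholds>;

      Multiplicities merge(const std::vector<Multiplicities>& inputs) {
          Multiplicities out{};
          for (const auto& in : inputs)
              for (int t = 0; t < kNumThresholds; ++t)
                  out[t] = static_cast<uint8_t>(std::min<int>(7, out[t] + in[t]));  // saturating sum
          return out;
      }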

  20. The Digital Algorithm Processors for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Silverstein, S; The ATLAS collaboration

    2009-01-01

The ATLAS Level-1 Calorimeter Trigger identifies high-ET jets, electrons/photons and hadrons and measures total and missing transverse energy in proton-proton collisions at the Large Hadron Collider. Two subsystems – the Jet/Energy-sum Processor (JEP) and the Cluster Processor (CP) – process data from every crossing, and report feature multiplicities and energy sums to the ATLAS Central Trigger Processor, which produces a Level-1 Accept decision. Locations and types of identified features are read out to the Level-2 Trigger as regions-of-interest, and quality-monitoring information is read out to the ATLAS data acquisition system. The JEP and CP subsystems share a great deal of common infrastructure, including a custom backplane, several common hardware modules, and readout hardware. Some of the common modules use FPGAs with selectable firmware configurations based on the location in the system. This approach saved substantial development effort and provided a uniform model for software development. We pre...

  1. The ATLAS Level-1 Trigger Timing Setup

    CERN Document Server

    Spiwoks, R; Ellis, Nick; Farthouat, P; Gällnö, P; Haller, J; Krasznahorkay, A; Maeno, T; Pauly, T; Pessoa-Lima, H; Resurreccion-Arcas, I; Schuler, G; De Seixas, J M; Torga-Teixeira, R; Wengler, T

    2005-01-01

The ATLAS detector at CERN's LHC will be exposed to proton-proton collisions at a bunch-crossing rate of 40 MHz. In order to reduce the data rate, a three-level trigger system selects potentially interesting physics events. The first trigger level is implemented in electronics and firmware. It aims at reducing the output rate to less than 100 kHz. The Central Trigger Processor combines information from the calorimeter and muon trigger processors and makes the final Level-1-Accept decision. It is a central element in the timing setup of the experiment. Three aspects are considered in this article: the timing setup with respect to the Level-1 trigger, with respect to the experiment, and with respect to the world.

  2. Digital Filtering Performance in the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Hadley, D R; The ATLAS collaboration

    2010-01-01

The ATLAS Level-1 Calorimeter Trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates, and to measure total and missing ET in the ATLAS Liquid Argon and Tile calorimeters. It is a pipelined processor system, with a new set of inputs being evaluated every 25 ns. The overall trigger decision has a latency budget of 2 µs, including all transmission delays. The calorimeter trigger uses about 7200 reduced-granularity analogue signals, which are first digitized at the 40 MHz LHC bunch-crossing frequency, before being passed to a digital Finite Impulse Response (FIR) filter. Due to latency and chip real-estate constraints, only a simple 5-element filter with limited precision can be used. Nevertheless, this filter achieves a significant reduction in noise, along with improving the bunch-crossing assignment and energy resolution for small signals. The context in which digital filters are used for the ATLAS Level-1 Calorimeter Trigger is presented, before descr...

  3. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase in instantaneous luminosity by a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions thanks to double the number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally, it allows concurrent running of up to 3 different sub-detector combinations, which is particularly useful for commissioning, calibration and test runs. In this contribution, we give an overview of the operational software framework of the L1CT system, with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are m...

  4. The ATLAS Level-1 Trigger System with 13 TeV nominal LHC collisions

    CERN Document Server

    Helary, Louis; The ATLAS collaboration

    2017-01-01

The Level-1 (L1) Trigger system of the ATLAS experiment at CERN's Large Hadron Collider (LHC) plays a key role in ATLAS detector data-taking. It is a hardware system that selects in real time events containing physics-motivated signatures. Selection is based purely on calorimeter energy depositions and hits in the muon chambers consistent with muon candidates. The L1 Trigger system has been upgraded to cope with the more challenging Run-II LHC beam conditions, including increased centre-of-mass energy, increased instantaneous luminosity and higher levels of pileup. This talk summarises the improvements, commissioning and performance of the ATLAS L1 Trigger for the LHC Run-II data period. The acceptance of muon triggers has been improved by increasing the hermeticity of the muon spectrometer. New strategies to obtain better muon trigger signal purity were designed for certain geometrically difficult transition regions by using the ATLAS hadronic calorimeter. Algorithms to reduce noise spikes in muon trig...

  5. ATLAS Level-1 Topological Trigger

    CERN Document Server

    Zheng, Daniel; The ATLAS collaboration

    2018-01-01

The ATLAS experiment has introduced and recently commissioned a completely new hardware sub-system of its first-level trigger: the topological processor (L1Topo). L1Topo consists of two AdvancedTCA blades mounting state-of-the-art FPGA processors, providing high input bandwidth (up to 4 Gb/s) and low-latency data processing (200 ns). L1Topo is able to select collision events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Results from data recorded using the L1Topo trigger will be presented. These results demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-pT leptons, including H->tau tau and J/Psi->mu mu. In addition to describing the L1Topo trigger system, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize...
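
    As an illustration of the kinematic requirements applied by L1Topo, the sketch below computes a pairwise invariant mass from the (ET, eta, phi) information available at Level-1, in the massless approximation, and applies a threshold. The object structure and the cut value are illustrative; the real algorithms run in FPGA firmware with fixed-point arithmetic.

      // Invariant-mass requirement on a pair of Level-1 trigger objects.
      #include <cmath>

      struct TriggerObject {
          double et;   // transverse energy
          double eta;  // pseudorapidity
          double phi;  // azimuthal angle
      };

      // Massless approximation: m^2 = 2*ET1*ET2*(cosh(deta) - cos(dphi)).
      double invariantMassSquared(const TriggerObject& a, const TriggerObject& b) {
          double dEta = a.eta - b.eta;
          double dPhi = a.phi - b.phi;  // cos() is periodic, so no phi wrapping needed
          return 2.0 * a.et * b.et * (std::cosh(dEta) - std::cos(dPhi));
      }

      bool passMassCut(const TriggerObject& a, const TriggerObject& b, double massMin) {
          return invariantMassSquared(a, b) > massMin * massMin;
      }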

  6. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, George Victor

    2010-10-27

The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, over which configuration data is loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", a Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to meet the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  7. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    International Nuclear Information System (INIS)

    Andrei, George Victor

    2010-01-01

The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, over which configuration data is loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", a Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to meet the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  9. Overview and performance of the ATLAS Level-1 Topological Trigger

    CERN Document Server

    Damp, Johannes Frederic; The ATLAS collaboration

    2018-01-01

In 2017 the LHC provided proton-proton collisions to the ATLAS experiment at high luminosity (up to 2.06×10^34 cm^-2 s^-1), placing stringent operational and physical requirements on the ATLAS trigger system in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz, while not rejecting interesting physics events. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 μs. An important role is played by its newly commissioned component: the L1 topological trigger (L1Topo). This innovative system consists of two blades designed in the AdvancedTCA form factor, mounting four individual state-of-the-art processors and providing high input bandwidth and low-latency data processing. Up to 128 topological trigger algorithms can be implemented to select interesting events by applying kinematic and angular requirements on electromagnetic clusters, jets, muons and total energy. This results in a significantly...

  10. The ATLAS Level-1 Central Trigger Processor (CTP)

    CERN Document Server

    Spiwoks, Ralf; Ellis, Nick; Farthouat, P; Gällnö, P; Haller, J; Krasznahorkay, A; Maeno, T; Pauly, T; Pessoa-Lima, H; Resurreccion-Arcas, I; Schuler, G; De Seixas, J M; Torga-Teixeira, R; Wengler, T

    2005-01-01

The ATLAS Level-1 Central Trigger Processor (CTP) combines information from calorimeter and muon trigger processors and makes the final Level-1 Accept (L1A) decision on the basis of lists of selection criteria (trigger menus). In addition to the event-selection decision, the CTP also provides trigger summary information to the Level-2 trigger and the data acquisition system. It further provides accumulated and bunch-by-bunch scaler data for monitoring of the trigger, detector and beam conditions. The CTP is presented and results are shown from tests with the calorimeter and muon trigger processors connected to detectors in a particle beam, as well as from stand-alone full-system tests in the laboratory which were used to validate the CTP.

  11. Upgrade of the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Wessels, M; The ATLAS collaboration

    2014-01-01

The Level-1 Calorimeter Trigger (L1Calo) of the ATLAS experiment has been operating well since the start of LHC data taking, and played a major role in the Higgs boson discovery. To face the new challenges posed by the upcoming increases of the LHC proton beam energy and luminosity, a series of upgrades is planned for L1Calo. The initial upgrade phase in 2013-14 includes substantial improvements to the analogue and digital signal processing to allow more sophisticated digital filters for energy and timing measurement, as well as to compensate for pile-up and baseline-shift effects. Two existing digital algorithm processor subsystems will receive substantial hardware and firmware upgrades to increase the real-time data path bandwidth, allowing topological information to be transmitted and processed at Level-1. An entirely new subsystem, the Level-1 Topological Processor, will receive real-time data from both the upgraded L1Calo and Level-1 Muon Trigger to perform trigger algorithms based on entire event topolo...

  12. ATLAS Level-1 Calorimeter Trigger: Initial Timing and Energy Calibration

    CERN Document Server

    Childers, J T; The ATLAS collaboration

    2010-01-01

The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of ~2.0 µs using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustment to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further optimization. The res...
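
    A minimal C++ sketch of peak-finding bunch-crossing identification (BCID), as performed by the Preprocessor described above, follows: a sample is assigned to a bunch crossing when it is a local maximum above a noise threshold. The threshold and the comparison convention are illustrative assumptions.

      // Peak-finder BCID over a sequence of (filtered) samples, one per bunch crossing.
      #include <cstddef>
      #include <vector>

      std::vector<int> findBunchCrossings(const std::vector<int>& samples, int threshold) {
          std::vector<int> bcids;
          for (std::size_t i = 1; i + 1 < samples.size(); ++i) {
              if (samples[i] > threshold &&
                  samples[i] >= samples[i - 1] && samples[i] > samples[i + 1]) {
                  bcids.push_back(static_cast<int>(i));  // sample index = candidate BC
              }
          }
          return bcids;
      }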

  13. Upgrade of the ATLAS Level-1 Trigger with event topology information

    CERN Document Server

    Simioni, Eduard; The ATLAS collaboration; Bauss, B; Büscher, V; Jakobi, K; Kaluza, A; Kahra, C; Reiss, A; Schäffer, J; Schulte, A; Simon, M; Tapprogge, S; Vogel, A; Zinser, M; Palka, M

    2015-01-01

The Large Hadron Collider (LHC) in 2015 will collide proton beams with luminosity increased from $10^{34}$ up to $3 \times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. ATLAS is an LHC experiment designed to measure the decay properties of highly energetic particles produced in proton collisions. The higher luminosity places stringent operational and physical requirements on the ATLAS Trigger in order to reduce the 40 MHz collision rate to a manageable event storage rate of 1 kHz while at the same time selecting those events with valuable physics content. The Level-1 Trigger is the first rate-reducing step in the ATLAS Trigger, with an output rate of 100 kHz and a decision latency of less than 2.5 $\mu$s. It is composed of the Calorimeter Trigger (L1Calo), the Muon Trigger (L1Muon) and the Central Trigger Processor (CTP). In 2014, there will be a new electronics element in the chain: the Topological Processor System (L1Topo system). The L1Topo system consists of a single AdvancedTCA shelf equipped with three L1Topo processor ...

  14. Instrumentation of a Level-1 Track Trigger at ATLAS with Double Buffer Front-End Architecture

    CERN Document Server

    Cooper, B; The ATLAS collaboration

    2012-01-01

Around 2021 the Large Hadron Collider will be upgraded to provide instantaneous luminosities of 5x10^34 cm^-2 s^-1, leading to excessive rates from the ATLAS Level-1 trigger. We describe a double-buffer front-end architecture for the ATLAS tracker replacement which should enable tracking information to be used in the Level-1 decision. This will allow Level-1 rates to be controlled whilst preserving high efficiency for single-lepton triggers at relatively low transverse momentum thresholds of pT ~ 25 GeV, enabling ATLAS to remain sensitive to physics at the electroweak scale. In particular, a potential hardware solution for the communication between the upgraded silicon barrel strip detectors and the external processing within this architecture will be described, and discrete-event simulations used to demonstrate that this fits within the tight latency constraints.

  15. Upgrade of the ATLAS Level-1 trigger with an FPGA based Topological Processor

    CERN Document Server

    Caputo, R; The ATLAS collaboration; Buescher, V; Degele, R; Kiese, P; Maldaner, S; Reiss, A; Schaefer, U; Simioni, E; Tapprogge, S; Urrejola, P

    2013-01-01

The ATLAS experiment is located at the European Organization for Nuclear Research (CERN) in Switzerland. It is designed to measure the decay properties of highly energetic particles produced in proton collisions at the Large Hadron Collider (LHC). The LHC collides protons at a frequency of 40 MHz, and thus requires a trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. Event triggering is therefore one of the extraordinary challenges faced by the ATLAS detector. The Level-1 Trigger is the first rate-reducing step in the ATLAS Trigger, with an output rate of 75 kHz and a decision latency of less than 2.5 $\mu$s. It is primarily composed of the Calorimeter Trigger, the Muon Trigger and the Central Trigger Processor (CTP). Due to the increase in the LHC instantaneous luminosity up to 3$\times$10$^{34}$ cm$^{−2}$ s$^{−1}$ from 2015 onwards, a new element will be included in the Level-1 Trigger scheme: the Topological Processor (L1Topo). The L1Topo receives data in a dedicate...

  16. Operation and performance of the ATLAS Level-1 Calorimeter and Level-1 Topological Triggers in Run 2 at the LHC

    CERN Document Server

    Whalen, Kate; The ATLAS collaboration

    2017-01-01

    In Run 2 at CERN's Large Hadron Collider, the ATLAS detector uses a two-level trigger system to reduce the event rate from the nominal collision rate of 40 MHz to the event storage rate of 1 kHz, while preserving interesting physics events. The first step of the trigger system, Level-1, reduces the event rate to 100 kHz with a latency of less than 2.5 μs. One component of this system is the Level-1 Calorimeter Trigger (L1Calo), which uses coarse-granularity information from the electromagnetic and hadronic calorimeters to identify regions of interest corresponding to electrons, photons, taus, jets, and large amounts of transverse energy and missing transverse energy. In this talk, we will discuss the improved performance of the L1Calo system in the challenging, high-luminosity conditions provided by the LHC in Run 2. As the LHC exceeds its design luminosity, it is becoming even more critical to reduce event rates while preserving physics. A new feature of the ATLAS trigger system for Run 2 is the Level-1 Top...

  17. Precision Timing of the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Davygora, Yuriy; The ATLAS collaboration

    2012-01-01

The ATLAS Level-1 Calorimeter Trigger is one of the main elements of the first-stage online selection of LHC collision events measured at the ATLAS experiment. Using 7168 pre-summed trigger tower signals from the Liquid Argon and Tile calorimeters as input, the hardware-based system identifies high-pT objects and determines the total and missing transverse energy sums within a fixed latency of 2.5 μs. The Preprocessor system digitizes the analogue calorimeter signals at the LHC bunch-crossing frequency of 40 MHz and provides bunch-crossing identification and energy measurement. A prerequisite for high stability and accuracy of this procedure is timing synchronization at the nanosecond level of the signals belonging to the same collision event. The synchronization of the trigger tower signals was first established in the analysis of beam splash events in November 2009 and then refined and sustained with data from proton-proton collisions at a centre-of-mass energy of 7 TeV, recorded at the LHC in 2010 and 201...

  18. The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning

    CERN Document Server

    Simioni, E; The ATLAS collaboration

    2014-01-01

The ATLAS experiment is located at the European Organization for Nuclear Research (CERN) in Switzerland. It is designed to measure the decay properties of highly energetic particles produced in proton collisions at the Large Hadron Collider (LHC). The LHC has a beam collision frequency of 40 MHz, and thus requires a trigger system to efficiently select events, thereby reducing the storage rate to a manageable level of about 400 Hz. Event triggering is therefore one of the extraordinary challenges faced by the ATLAS detector. The Level-1 Trigger is the first rate-reducing step in the ATLAS Trigger, with an output rate of 75 kHz and a decision latency of less than 2.5 μs. It is primarily composed of the Calorimeter Trigger, the Muon Trigger and the Central Trigger Processor (CTP). Due to the increase in the LHC instantaneous luminosity up to 3 x 10^34 cm^-2 s^-1 from 2015 onwards, a new element will be included in the Level-1 Trigger scheme: the Topological Processor (L1Topo). The L1Topo receives data in a specialized format from the ...

  19. The ATLAS Level-1 Calorimeter Trigger Architecture

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Watkins, P M; Watson, A T; Achenbach, R; Hanke, P; Kluge, E E; Meier, K; Meshkov, P; Nix, O; Penno, K; Schmitt, K; Ay, Cc; Bauss, B; Dahlhoff, A; Jakobs, K; Mahboubi, K; Schäfer, U; Trefzger, T M; Eisenhandler, E F; Landon, M; Moyse, E; Thomas, J; Apostoglou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Perera, V J O; Qian, W; Bohm, C; Hellman, S; Hidvégi, A; Silverstein, S; RT 2003 13th IEEE-NPSS Real Time Conference

    2004-01-01

The architecture of the ATLAS Level-1 Calorimeter Trigger system (L1Calo) is presented. Common approaches have been adopted for data distribution, result merging, readout, and slow control across the three different subsystems. A significant amount of common hardware is utilized, yielding substantial savings in cost, spares, and development effort. A custom, high-density backplane has been developed with data paths suitable for both the em/tau cluster processor (CP) and jet/energy-summation processor (JEP) subsystems. Common modules also provide interfaces to VME, CANbus and the LHC Timing, Trigger and Control system (TTC). A common data merger module (CMM) uses FPGAs with multiple configurations for summing electron/photon and tau/hadron cluster multiplicities, jet multiplicities, or total and missing transverse energy. The CMM performs both crate- and system-level merging. A common, FPGA-based readout driver (ROD) is used by all of the subsystems to send input, intermediate and output data to the data acquis...

  20. ATLAS level-1 calorimeter trigger hardware: initial timing and energy calibration

    CERN Document Server

    Childers, JT; The ATLAS collaboration

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of up to 2.4 microseconds using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustments to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further op...

  1. The new Level-1 Topological Trigger for the ATLAS experiment at the Large Hadron Collider

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00047907; The ATLAS collaboration

    2017-01-01

At the CERN Large Hadron Collider, the world’s most powerful particle accelerator, the ATLAS experiment records high-energy proton collisions to investigate the properties of fundamental particles. These collisions take place at a rate of 40 MHz, and the ATLAS trigger system selects the interesting ones, reducing the rate to 1 kHz and allowing for their storage and subsequent offline analysis. The ATLAS trigger system is organized in two levels, with increasing degrees of detail and accuracy. The first-level trigger reduces the event rate to 100 kHz with a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, the muon trigger and the central trigger processor. A new component of the first-level trigger was introduced in 2015: the Topological Processor (L1Topo). It allows detailed real-time information from the Level-1 calorimeter and muon systems to be used, advanced kinematic quantities to be computed using state-of-the-art FPGA processors, and interesting events to be selected based on several com...

  2. The ATLAS Level-1 Topological Trigger performance in Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00120419; The ATLAS collaboration

    2017-01-01

    The Level-1 trigger is the first event rate reducing step in the ATLAS detector trigger system, with an output rate of up to 100 kHz and decision latency smaller than 2.5 μs. During the LHC shutdown after Run 1, the Level-1 trigger system was upgraded at hardware, firmware and software levels. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Level-1 Topological trigger system. It consists of a single electronics shelf equipped with two Level-1 Topological processor blades. They receive real-time information from the Level-1 calorimeter and muon triggers, which is processed to measure angles between trigger objects, invariant masses or other kinematic variables. Complementary to other requirements, these measurements are taken into account in the final Level-1 trigger decision. The system was installed and commissioning started in 2015 and continued during 2016. As part of the commissioning, the decisions from individual algorithms were simulated and compar...

  3. Performance of the ATLAS Muon Trigger and Phase-1 Upgrade of Level-1 Endcap Muon Trigger

    CERN Document Server

    Mizukami, Atsushi; The ATLAS collaboration

    2017-01-01

The ATLAS experiment utilises a trigger system to efficiently record interesting events. It consists of first-level and high-level triggers. The first-level trigger is implemented with custom-built hardware to reduce the event rate from 40 MHz to 100 kHz. Then the software-based high-level triggers refine the trigger decisions, reducing the output rate down to 1 kHz. Events with muons in the final state are an important signature for many physics topics at the LHC. An efficient trigger on muons and a detailed understanding of its performance are required. Trigger efficiencies are, for example, obtained from muon decays of the Z boson, with a Tag&Probe method, using proton-proton collision data collected in 2016 at a centre-of-mass energy of 13 TeV. The LHC is expected to increase its instantaneous luminosity to $3\times10^{34}\,\rm{cm^{-2}s^{-1}}$ after the phase-1 upgrade between 2018-2020. The upgrade of the ATLAS trigger system is mandatory to cope with this high luminosity. In the phase-1 upgrade, new det...
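
    After event selection and trigger matching, the Tag&Probe efficiency mentioned above reduces to a ratio of counts; the small C++ sketch below shows this final step together with a binomial uncertainty. The selection and matching themselves are not shown, and the function is illustrative only.

      // Efficiency of matched probes out of all probes, with binomial uncertainty.
      #include <cmath>
      #include <utility>

      std::pair<double, double> triggerEfficiency(long nPassed, long nProbes) {
          double eff = (nProbes > 0) ? static_cast<double>(nPassed) / nProbes : 0.0;
          double err = (nProbes > 0) ? std::sqrt(eff * (1.0 - eff) / nProbes) : 0.0;
          return {eff, err};
      }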

  4. Physics performances with the new ATLAS Level-1 Topological trigger in Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00414333; The ATLAS collaboration

    2016-01-01

The ATLAS trigger system aims at reducing the 40 MHz proton-proton collision event rate to a manageable event storage rate of 1 kHz, preserving events valuable for physics analysis. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, muon trigger and central trigger processor. During the last upgrade, a new electronics element was introduced to Level-1: the Topological Processor System. It will make it possible to use detailed real-time information from the Level-1 calorimeter and muon triggers, processed in individual state-of-the-art FPGA processors, to determine angles between jets and/or leptons and to calculate kinematic variables based on lists of selected/sorted objects. More than one hundred VHDL algorithms produce trigger outputs to be incorporated into the central trigger processor. This information will be essential to improve background reject...

  5. ATLAS level-1 calorimeter trigger hardware: initial timing and energy calibration

    International Nuclear Information System (INIS)

    Childers, J T

    2011-01-01

The ATLAS Level-1 Calorimeter Trigger identifies high-pT objects in the Liquid Argon and Tile Calorimeters with a fixed latency of up to 2.5 μs using a hardware-based, pipelined system built with custom electronics. The Preprocessor Module conditions and digitizes about 7200 pre-summed analogue signals from the calorimeters at the LHC bunch-crossing frequency of 40 MHz, and performs bunch-crossing identification (BCID) and deposited energy measurement for each input signal. This information is passed to further processors for object classification and total energy calculation, and the results are used to make the Level-1 trigger decision for the ATLAS detector. The BCID and energy measurement in the trigger depend on precise timing adjustments to achieve correct sampling of the input signal peak. Test pulses from the calorimeters were analysed to derive the initial timing and energy calibration, and first data from the LHC restart in autumn 2009 and early 2010 were used for validation and further optimization. The results from these calibration measurements are presented.

  6. Performances of the ATLAS Level-1 Muon barrel trigger during the Run-II data taking

    CERN Document Server

    Sessa, Marco; The ATLAS collaboration

    2017-01-01

The Level-1 Muon Barrel Trigger is one of the main elements of the event selection of the ATLAS experiment at the Large Hadron Collider. It exploits the Resistive Plate Chambers (RPC) detectors to generate the trigger signal. The RPCs are placed in the barrel region of the ATLAS experiment: they are arranged in three concentric double layers and operate in a strong toroidal magnetic field. RPC detectors cover the pseudo-rapidity range $|\eta|<1.05$ for a total surface of more than 4000 m$^2$ and about 3600 gas volumes. The Level-1 Muon Trigger in the barrel region allows muon candidates to be selected according to their transverse momentum and associates them with the correct bunch-crossing number. The trigger system is able to take a decision within a latency of about 2 $\mu$s. The detailed measurement of the RPC detector efficiencies and of the trigger performance during the ATLAS Run-II data taking is presented here.

  7. Performance of the ATLAS Level-1 muon barrel trigger during the Run 2 data taking

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00404546; The ATLAS collaboration

    2018-01-01

The Level-1 Muon Barrel Trigger is one of the main elements of the event selection of the ATLAS experiment at the Large Hadron Collider. It exploits the Resistive Plate Chambers (RPC) detectors to generate the trigger signal. The RPCs are placed in the barrel region of the ATLAS experiment: they are arranged in three concentric double layers and operate in a strong toroidal magnetic field. RPC detectors cover the pseudo-rapidity range |η| < 1.05 for a total surface of more than 4000 m² and about 3600 gas volumes. The Level-1 Muon Trigger in the barrel region allows muon candidates to be selected according to their transverse momentum and associates them with the correct bunch-crossing. The trigger system is able to take a decision within a latency of about 2 μs. The measurements of the RPC detector efficiencies and the trigger performance during the ATLAS Run-II data taking are presented here.

  8. Pre-Production Validation of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Achenbach, R; Barnett, B M; Bauss, B; Belkin, A; Bohm, C; Brawn, I P; Davis, A O; Edwards, J; Eisenhandler, E F; Föhlisch, F; Gee, C N P; Geweniger, C; Gillman, A R; Hanke, P; Hellman, S; Hidvégi, A; Hillier, S J; Kluge, E E; Landon, M; Mahboubi, K; Mahout, G; Meier, K; Mirea, A; Moye, T H; Perera, V J O; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Staley, R J; Tapprogge, S; Thomas, J P; Trefzger, T; Typaldos, D; Watkins, P M; Watson, A; Weber, G A; Weber, P; 14th IEEE - NPSS Real Time Conference 2005 Nuclear Plasma Sciences Society

    2005-01-01

    The Level-1 Calorimeter Trigger is a major part of the first stage of event selection for the ATLAS experiment at the LHC. It is a digital, pipelined system with several stages of processing, largely based on FPGAs, which perform programmable algorithms in parallel with a fixed latency to process about 300 Gbyte/s of input data. The real-time output consists of counts of different types of trigger objects and energy sums. Prototypes of all module types have been undergoing intensive testing before final production during 2005. Verification of their correct operation has been performed standalone and in the ATLAS test-beam at CERN. Results from these investigations will be presented, along with a description of the methodology used to perform the tests.

  9. The performance of the ATLAS Level-1 Calorimeter Trigger with LHC collision data

    CERN Document Server

    Bracinik, J

    2011-01-01

    The ATLAS first-level calorimeter trigger is a hardware-based system designed to identify high-E$_T$ jets, electron/photon and $\tau$ candidates and to measure total and missing E$_T$ in the ATLAS calorimeters. After more than two years of commissioning in situ with calibration data and cosmic rays, the system has now been used extensively to select the most interesting proton-proton collision events. Fine tuning of timing and energy calibration has been carried out in 2010 to improve the trigger response to physics objects. In these proceedings, an analysis of the performance of the level-1 calorimeter trigger is presented, along with the techniques used to achieve these results.

  10. Performances of the ATLAS RPC Level-1 Muon trigger during the Run-II data taking

    CERN Document Server

    Alberghi, Gian Luigi; The ATLAS collaboration

    2018-01-01

    The Level-1 Muon Barrel Trigger is one of the main elements of the event selection of the ATLAS experiment at the Large Hadron Collider. Its input stage consists of an array of processors receiving the full granularity of data from the Resistive Plate Chambers in the central ("barrel") area of the ATLAS detector. The RPCs, placed in the barrel region of the ATLAS detector, are arranged in three concentric double layers and operate in a strong toroidal magnetic field. The RPC detectors cover the pseudo-rapidity range |η| < 1.05 for a total surface of more than 4000 m^2 and about 3600 gas volumes. The Level-1 Muon Trigger in the barrel region selects muon candidates according to their transverse momentum and associates them with the correct bunch-crossing number. The trigger system is able to take a decision within a latency of about 2 μs. We illustrate the selections, strategy and validation for an unbiased determination of the efficiency and timing of the RPC and the L1 from data; and show the results w...

  11. The ATLAS high level trigger region of interest builder

    International Nuclear Information System (INIS)

    Blair, R.; Dawson, J.; Drake, G.; Haberichter, W.; Schlereth, J.; Zhang, J.; Ermoline, Y.; Pope, B.; Aboline, M.; High Energy Physics; Michigan State Univ.

    2008-01-01

    This article describes the design, testing and production of the ATLAS Region of Interest Builder (RoIB). This device acts as an interface between the Level 1 trigger and the high level trigger (HLT) farm for the ATLAS detector at the LHC. It distributes all of the Level 1 data for a subset of events to a small number (16 or fewer) of individual commodity processors. These processors in turn provide this information to the HLT. This allows the HLT to use the Level 1 information to narrow data requests to areas of the detector where Level 1 has identified interesting objects.
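
    As a hedged illustration of the distribution pattern described above (the record format and the round-robin policy are assumptions made only for this sketch, not the RoIB design), a minimal dispatcher over a small processor pool might look like this:

```python
# Minimal sketch of distributing Level-1 event records round-robin over a small
# pool of processors, in the spirit of the RoI Builder described above. The
# record format and the round-robin policy are illustrative assumptions only.
from itertools import cycle

def distribute(events, n_processors=16):
    """Assign each Level-1 event record to one of at most 16 processors."""
    assert 1 <= n_processors <= 16
    queues = [[] for _ in range(n_processors)]
    targets = cycle(range(n_processors))
    for event in events:
        queues[next(targets)].append(event)
    return queues

# Toy example with integer Level-1 event identifiers.
for i, q in enumerate(distribute([{"l1id": n} for n in range(8)], n_processors=4)):
    print(f"processor {i}: {[e['l1id'] for e in q]}")
```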

  12. gFEX, the ATLAS Calorimeter Level-1 Real Time Processor

    CERN Document Server

    AUTHOR|(SzGeCERN)759889; The ATLAS collaboration; Begel, Michael; Chen, Hucheng; Lanni, Francesco; Takai, Helio; Wu, Weihao

    2016-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx Virtex UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 276 optical fibers with the data transferred at the 40 MHz Large Hadron Collider (LHC) clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor Field-Programmable Gate Arrays (FPGAs), monitor board health, and interface to external signals. Now, the pre-prototype board, which includes one ZYNQ and one Virtex-7 FPGA ...

  13. gFEX, the ATLAS Calorimeter Level 1 Real Time Processor

    CERN Document Server

    Tang, Shaochun; The ATLAS collaboration

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be packaged in an Advanced Telecommunications Computing Architecture (ATCA) module and implemented as a fast reconfigurable processor based on three Xilinx UltraScale FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure all the processor FPGAs, monitor board health, and interface to external signals. The pre-prototype board, which includes one ZYNQ and one Virtex-7 FPGA, has been designed for testing and verification. The performance ...

  14. The development of Global Feature eXtractor (gFEX) - the ATLAS calorimeter Level 1 trigger for ATLAS at High Luminosity LHC

    CERN Document Server

    AUTHOR|(SzGeCERN)759889; The ATLAS collaboration; Begel, Michael; Chen, Hucheng; Chen, Kai; Lanni, Francesco; Takai, Helio; Wu, Weihao

    2017-01-01

    As part of the ATLAS Phase-I Upgrade, the gFEX is designed to help maintain the ATLAS Level-1 trigger acceptance rate with the increasing LHC luminosity. The gFEX identifies patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the 40 MHz LHC bunch crossing rate. Prototypes v1 and v2 were designed and fully tested in 2015 and 2016, respectively. A pre-production gFEX board has been manufactured: an ATCA module consisting of three UltraScale+ FPGAs, one ZYNQ UltraScale+, and 35 MiniPODs. This board receives coarse-granularity (0.2x0.2) information from the entire ATLAS calorimeters on up to 300 optical fibers and provides 96 links to the L1Topo at speeds of up to 12.8 Gb/s.
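
    To put the quoted figures in perspective, a rough upper bound on the aggregate input bandwidth follows from multiplying the fibre count by the link speed; note that the 12.8 Gb/s figure is quoted for the L1Topo links, so treating it as the per-fibre input rate is an assumption made only for this estimate:

```python
# Back-of-the-envelope upper bound on the board's input bandwidth, assuming all
# of the "up to 300" input fibres ran at the 12.8 Gb/s figure quoted for the
# L1Topo links (an assumption for illustration, not a specification).
n_fibres = 300
link_speed_gbps = 12.8
aggregate_tbps = n_fibres * link_speed_gbps / 1000.0
print(f"aggregate input bandwidth <= {aggregate_tbps:.2f} Tb/s")  # about 3.84 Tb/s
```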

  15. Multi-threaded algorithms for GPGPU in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00212700; The ATLAS collaboration

    2017-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significa...

  16. The ATLAS Level-1 Topological Trigger Design and Operation in Run-2

    CERN Document Server

    Igonkina, Olga; The ATLAS collaboration

    2018-01-01

    The ATLAS Level-1 Trigger system performs initial event selection using data from the calorimeters and the muon spectrometer to reduce the LHC collision event rate down to about 100 kHz. Trigger decisions from the different sub-systems are combined in the Central Trigger Processor for the final Level-1 decision. A new FPGA-based AdvancedTCA sub-system, the Topological Processor System, was introduced to calculate complex kinematic observables in real time. It was installed during the shutdown; commissioning started in 2015 and continued during 2016. The design and operation of the Level-1 Topological Trigger in Run-2 will be illustrated.

  17. Feasibility studies of a Level-1 Tracking Trigger for ATLAS

    CERN Document Server

    Warren, M; Brenner, R; Konstantinidis, N; Sutton, M

    2009-01-01

    The existing ATLAS Level-1 trigger system is seriously challenged at the SLHC's higher luminosity. A hardware tracking trigger might be needed, but requires a detailed understanding of the detector. Simulation of high pile-up events, with various data-reduction techniques applied, will be described. Two scenarios are envisaged: (a) regional readout, where calorimeter and muon triggers are used to identify portions of the tracker; and (b) track-stub finding using special trigger layers. A proposed hardware system, including data reduction on the front-end ASICs, readout within a super-module and integration of regional triggering into all levels of the readout system, will be discussed.

  18. Digital signal integrity and stability in the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Achenbach, R; Aharrouche, M; Andrei, V; Åsman, B; Barnett, B M; Bauss, B; Bendel, M; Bohm, C; Booth, J R A; Bracinik, J; Brawn, I P; Charlton, D G; Childers, J T; Collins, N J; Curtis, C J; Davis, A O; Eckweiler, S; Eisenhandler, E F; Faulkner, P J W; Fleckner, J; Föhlisch, F; Gee, C N P; Gillman, A R; Goringer, C; Groll, M; Hadley, D R; Hanke, P; Hellman, S; Hidvegi, A; Hillier, S J; Johansen, M; Kluge, E E; Kühl, T; Landon, M; Lendermann, V; Lilley, J N; Mahboubi, K; Mahout, G; Meier, K; Middleton, R P; Moa, T; Morris, J D; Müller, F; Neusiedl, A; Ohm, C; Oltmann, B; Perera, V J O; Prieur, D P F; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Sjölin, J; Staley, R J; Stamen, R; Stockton, M C; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Watkins, P M; Watson, A; Weber, P; Wessels, M; Wildt, M

    2008-01-01

    The ATLAS Level-1 calorimeter trigger is a hardware-based system with the goal of identifying high-pT objects and measuring total and missing ET in the ATLAS calorimeters within an overall latency of 2.5 microseconds. This trigger system is composed of the Preprocessor, which digitises about 7200 analogue input channels, and two digital processors to identify high-pT signatures and to calculate the energy sums. The digital part consists of multi-stage, pipelined custom-built modules. The high demands on connectivity between the initial analogue stage and the digital part, and between the custom-built modules, are presented. Furthermore, the techniques to establish timing regimes and to verify connectivity and stable operation of these digital links are described.

  19. Spinal canal stenosis at the level of Atlas

    Directory of Open Access Journals (Sweden)

    Suchanda Bhattacharjee

    2011-01-01

    We report here a rare case of high cervical stenosis at the level of the atlas in a patient who presented with progressively deteriorating quadriparesis and respiratory distress. A 10-year-old boy presented with the above symptoms of one year's duration, with a preceding history of trivial trauma prior to the onset of such symptoms. Cervical spine MRI revealed a significant stenosis at the level of the atlas from the posterior side, with a syrinx extending above and below. High-resolution computed tomography of the same level revealed an ill-defined osseous bar compressing the canal at the level of the C1 posterior arch, which appeared bifid in the midline. The patient was immediately taken up for surgery in view of his respiratory complaints. The child showed an excellent recovery after excision of the posterior arch of the atlas and removal of the compressing osseous structure.

  20. Upgrade of the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    AUTHOR|(CDS)2072874

    2014-01-01

    The Level-1 calorimeter trigger (L1Calo) operated successfully during the first data taking phase of the ATLAS experiment at the LHC. Facing the new challenges posed by the upcoming increases of the LHC beam energy and luminosity, and from the experience of the previous running, a series of upgrades is planned for L1Calo. The initial upgrade phase in 2013-14 includes substantial improvements to the analogue and digital signal processing to cope with baseline shifts due to signal pile-up. Additionally a newly introduced system will receive real-time data from both the upgraded L1Calo and L1Muon trigger to perform trigger algorithms based on entire event topologies. During the second upgrade phase in 2018-19 major parts of L1Calo will be rebuilt in order to exploit a tenfold increase in the available calorimeter data granularity compared to that of the current system. The contribution gives an overview of the existing system and the lessons learned during the first period of LHC data taking. Based on these, the...

  1. Upgrade of the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Mueller, Felix; The ATLAS collaboration

    2014-01-01

    The Level-1 calorimeter trigger (L1Calo) operated successfully during the first data taking phase of the ATLAS experiment at the LHC. Based on the lessons learned, a series of upgrades is planned for L1Calo to face the new challenges posed by the upcoming increases of the LHC beam energy and luminosity. The initial upgrade phase in 2013-14 includes substantial improvements to the analogue and digital signal processing to cope with baseline shifts due to signal pile-up. Additionally a newly introduced system will receive real-time data from both the upgraded L1Calo and L1Muon trigger to perform trigger algorithms based on entire event topologies. During the second upgrade phase in 2018-19 major parts of L1Calo will be rebuilt in order to exploit a tenfold increase in the available calorimeter data granularity compared to that of the current system. In this contribution we present the lessons learned during the first period of LHC data taking. Based on these we discuss the expected performance improvements tog...

  2. The ATLAS Level-2 Trigger Pilot Project

    CERN Document Server

    Wickens, F J

    2000-01-01

    The Level-2 Trigger Pilot Project of ATLAS, one of the two general purpose LHC experiments, is part of the on-going programme to develop the ATLAS High Level Triggers (HLT). The Level-2 Trigger will receive events at up to 100 kHz, which has to be reduced to a rate suitable for full event-building of the order of 1 kHz. To reduce the data collection bandwidth and processing power required for the challenging Level-2 task it is planned to use Region of Interest guidance (from Level-1) and sequential processing. The Pilot Project included the construction and use of testbeds of up to 48 processing nodes, development of optimised components and computer simulations of a full system. It has shown how the required performance can be achieved, using largely commodity components and operating systems, and validated an architecture for the Level-2 system. This paper describes the principal achievements and conclusions of this project. (28 refs).

  3. The ATLAS Level-2 Trigger Pilot Project

    CERN Document Server

    Blair, R; Haberichter, W N; Schlereth, J L; Bock, R; Bogaerts, A; Boosten, M; Dobinson, Robert W; Dobson, M; Ellis, Nick; Elsing, M; Giacomini, F; Knezo, E; Martin, B; Shears, T G; Tapprogge, Stefan; Werner, P; Hansen, J R; Wäänänen, A; Korcyl, K; Lokier, J; George, S; Green, B; Strong, J; Clarke, P; Cranfield, R; Crone, G J; Sherwood, P; Wheeler, S; Hughes-Jones, R E; Kolya, S; Mercer, D; Hinkelbein, C; Kornmesser, K; Kugel, A; Männer, R; Müller, M; Sessler, M; Simmler, H; Singpiel, H; Abolins, M; Ermoline, Y; González-Pineiro, B; Hauser, R; Pope, B; Sivoklokov, S Yu; Boterenbrood, H; Jansweijer, P; Kieft, G; Scholte, R; Slopsema, R; Vermeulen, J C; Baines, J T M; Belias, A; Botterill, David R; Middleton, R; Wickens, F J; Falciano, S; Bystrický, J; Calvet, D; Gachelin, O; Huet, M; Le Dû, P; Mandjavidze, I D; Levinson, L; González, S; Wiedenmann, W; Zobernig, H

    2002-01-01

    The Level-2 Trigger Pilot Project of ATLAS, one of the two general purpose LHC experiments, is part of the on-going program to develop the ATLAS high-level triggers (HLT). The Level-2 Trigger will receive events at up to 100 kHz, which has to be reduced to a rate suitable for full event-building of the order of 1 kHz. To reduce the data collection bandwidth and processing power required for the challenging Level-2 task it is planned to use Region of Interest guidance (from Level-1) and sequential processing. The Pilot Project included the construction and use of testbeds of up to 48 processing nodes, development of optimized components and computer simulations of a full system. It has shown how the required performance can be achieved, using largely commodity components and operating systems, and validated an architecture for the Level-2 system. This paper describes the principal achievements and conclusions of this project. (28 refs).

  4. ATLAS level-1 jet trigger rates and study of the ATLAS discovery potential of the neutral MSSM Higgs bosons in b-jet decay channels

    CERN Document Server

    Mahboubi, Kambiz

    2001-01-01

    The response of the ATLAS calorimeters to electrons, photons and hadrons, in terms of the longitudinal and lateral shower development, is parameterized using the GEANT package and a detailed detector description (DICE). The parameterizations are implemented in the ATLAS Level-1 (LVL1) Calorimeter Trigger fast simulation package which, based on an average detector geometry, simulates the complete chain of the LVL1 calorimeter trigger system. In addition, pile-up effects due to multiple primary interactions are implemented taking into account the shape and time history of the trigger signals. An interface to the fast physics simulation package (ATLFAST) is also developed in order to perform ATLAS physics analysis, including the LVL1 trigger effects, in a consistent way. The simulation tools, the details of the parameterization and the interface are described. The LVL1 jet trigger thresholds corresponding to the current trigger menus are determined within the framework of the fast simulation, and the LVL1 jet tr...

  5. Towards a Level-1 tracking trigger for the ATLAS experiment

    CERN Document Server

    Cerri, A; The ATLAS collaboration

    2014-01-01

    The future plans for the LHC accelerator allow, through a schedule of phased upgrades, an increase in the average instantaneous luminosity by a factor of 5 with respect to the original design luminosity. The ATLAS experiment at the LHC will be able to maximise the physics potential from this higher luminosity only if the detector, trigger and DAQ infrastructure are adapted to handle the sustained increase in particle production rates. In this paper the changes expected to be required to the ATLAS detectors and trigger system to fulfil the requirements for working in such a high-luminosity scenario are described. The increased number of interactions per bunch crossing will result in higher occupancy in the detectors and increased rates at each level of the trigger system. The trigger will improve its selectivity partly through increased granularity of the sub-detectors and the consequent higher resolution. One of the largest challenges will be the provision of tracking information at the first trigger level...

  6. Upgrade of the PreProcessor System for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Khomich, A

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger is a hardware-based pipelined system designed to identify high-pT objects in the ATLAS calorimeters within a fixed latency of 2.5 μs. It consists of three subsystems: the PreProcessor, which conditions and digitizes analogue signals, and two digital processors. The majority of the PreProcessor's tasks are performed on a dense Multi-Chip Module (MCM) consisting of FADCs, time-adjustment and digital-processing ASICs, and LVDS serialisers, designed and implemented in decade-old technologies. An MCM substitute based on today's components (dual-channel FADCs and an FPGA) is being developed to profit from state-of-the-art electronics and to enhance the flexibility of the digital processing. The development and first test results are presented.

  7. The Phase-1 Upgrade of the ATLAS Level-1 Endcap Muon Trigger

    CERN Document Server

    Akatsuka, Shunichi; The ATLAS collaboration

    2018-01-01

    Talk slides for Real Time 2018, 9–15 June 2018, Williamsburg, Virginia, USA (20-minute slot). The talk covers the Phase-1 upgrade of the ATLAS Level-1 endcap muon trigger. The first part of the presentation gives an overview of the ATLAS trigger system, the muon trigger in Run 2, and the strategy of the Phase-1 upgrade. The following pages describe the physics algorithm of the Run-3 muon trigger and its performance. The main focus of the talk is the implementation of the trigger logic in the FPGA; the key components of this implementation are described using a schematic diagram and a simulation output screenshot.

  8. Commissioning and validation of the ATLAS Level-1 topological trigger

    CERN Document Server

    AUTHOR|(SzGeCERN)788741; The ATLAS collaboration; Hong, Tae Min

    2017-01-01

    The ATLAS experiment has recently commissioned a new hardware component of its first-level trigger: the topological processor (L1Topo). This innovative system, using state-of-the-art FPGA processors, selects events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Since the first-level trigger is a synchronous pipelined system, such requirements are applied within a latency of 200 ns. We will present the first results from data recorded using the L1Topo trigger; these demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-$P_{T}$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize the performance of this new trigger system. To reach ...

  9. Towards a Level-1 tracking trigger for the ATLAS experiment at the High Luminosity LHC

    CERN Document Server

    Martin, T A D; The ATLAS collaboration

    2014-01-01

    At the high-luminosity HL-LHC, upwards of 160 individual proton-proton interactions (pileup) are expected per bunch-crossing at luminosities of around $5\times10^{34}$ cm$^{-2}$s$^{-1}$. A proposal by the ATLAS collaboration to split the ATLAS first level trigger into two stages is briefly detailed. The use of fast track finding in the new first level trigger is explored as a method to provide the discrimination required to reduce the event rate to acceptable levels for the readout system while maintaining high efficiency in the selection of the decay products of electroweak bosons at HL-LHC luminosities. It is shown that the available bandwidth in the proposed new strip tracker is sufficient for a region-of-interest-based track trigger given certain optimisations; further methods for improving upon the proposal are discussed.

  10. Physics performances with the new ATLAS Level-1 Topological trigger in the LHC High-Luminosity Era

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00414333; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger system aims at reducing the 40 MHz proton-collision event rate to a manageable event storage rate of 1 kHz, preserving events with valuable physics content. The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency of less than 2.5 microseconds. It is composed of the calorimeter trigger, the muon trigger and the central trigger processor. During the last upgrade, a new electronics element was introduced at Level-1: L1Topo, the Topological Processor System. It makes it possible to use detailed real-time information from the Level-1 calorimeter and muon triggers, processed in individual state-of-the-art FPGA processors, to determine angles between jets and/or leptons and to calculate kinematic variables based on lists of selected/sorted objects. Over a hundred VHDL algorithms produce trigger outputs to be incorporated into the central trigger processor. Such information will be essential to improve background rejection and ...
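
    As an illustration of the kind of observables mentioned above (angles between objects, kinematic variables), the sketch below computes Δφ, ΔR and a two-object invariant mass from (ET, η, φ) in ordinary floating point; this is not the L1Topo firmware, which operates on reduced-precision inputs.

```python
# Floating-point illustrations of the kinds of kinematic quantities a topological
# trigger can compute from trigger objects (ET, eta, phi). The real system works
# on reduced-precision firmware inputs, which this sketch does not model.
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [-pi, pi)."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi plane."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

def invariant_mass(et1, eta1, phi1, et2, eta2, phi2):
    """Invariant mass of two massless objects: m^2 = 2 ET1 ET2 (cosh(d_eta) - cos(d_phi))."""
    return math.sqrt(2.0 * et1 * et2 *
                     (math.cosh(eta1 - eta2) - math.cos(delta_phi(phi1, phi2))))

# Example: two 30 GeV objects separated in eta and phi.
print(delta_r(0.5, 0.1, -0.3, 2.0))
print(invariant_mass(30.0, 0.5, 0.1, 30.0, -0.3, 2.0))
```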

  11. The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning

    CERN Document Server

    Simioni, E; The ATLAS collaboration

    2014-01-01

    The ATLAS detector at the Large Hadron Collider (LHC) is designed to measure decay properties of highly energetic particles produced in proton-proton collisions. During its first run, the LHC collided proton bunches at a frequency of 20 MHz, and therefore the detector required a trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. By 2015 the LHC instantaneous luminosity will be increased up to 3$\times 10^{34}$ cm$^{-2}$s$^{-1}$: this represents an unprecedented challenge for the ATLAS trigger system. To cope with the higher event rate and efficiently select relevant events from a physics point of view, a new element will be included in the Level-1 trigger scheme after 2015: the Topological Processor (L1Topo). The L1Topo system, currently developed at CERN, will initially consist of an ATCA crate and two L1Topo modules. A high-density opto-electrical converter (AVAGO miniPOD) drives up to 1.6 Tb/s of data from the calorimeter and muon detectors into two high-end ...
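
    A quick way to read the 1.6 Tb/s figure is to divide it by the bunch-crossing rate; the sketch below does this arithmetic, assuming the nominal 40 MHz LHC crossing rate of later running (the abstract itself quotes 20 MHz for Run 1), purely as an order-of-magnitude estimate.

```python
# Rough payload per bunch crossing implied by the figures above, assuming the
# full 1.6 Tb/s arrives continuously at the nominal 40 MHz bunch-crossing rate
# (an assumption for this estimate only).
input_tbps = 1.6
bc_rate_hz = 40e6
bits_per_bc = input_tbps * 1e12 / bc_rate_hz
print(f"~{bits_per_bc:.0f} bits (~{bits_per_bc / 8 / 1024:.1f} KiB) per bunch crossing")
```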

  12. Instrumentation of a Level-1 Track Trigger at ATLAS with Double Buffer Front-End Architecture

    CERN Document Server

    Cooper, B; The ATLAS collaboration

    2012-01-01

    The increased collision rate and pile-up produced at the HL-LHC require a substantial upgrade of the ATLAS Level-1 trigger in order to maintain a broad physics reach. We show that tracking information can be used to control trigger rates, and describe a proposal for how this information can be extracted within a two-stage Level-1 trigger design that has become the baseline for the HL-LHC upgrade. We demonstrate that, in terms of the communication between the external processing and the tracking detector front-ends, a hardware solution is possible that fits within the latency constraints of Level-1.

  13. ATLAS Level-1 Topological Trigger : Commissioning and Validation in Run 2

    CERN Document Server

    AUTHOR|(SzGeCERN)788741; The ATLAS collaboration; Hong, Tae Min

    2017-01-01

    The ATLAS experiment has recently commissioned a new hardware component of its first-level trigger: the topological processor (L1Topo). This innovative system, using state-of-the-art FPGA processors, selects events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Since the first-level trigger is a synchronous pipelined system, such requirements are applied within a latency of 200 ns. We will present the first results from data recorded using the L1Topo trigger; these demonstrate a significantly improved background event rejection, thus allowing for a rate reduction without efficiency loss. This improvement has been shown for several physics processes leading to low-$P_{T}$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, we will discuss the use of an accurate L1Topo simulation as a powerful tool to validate and optimize the performance of this new trigger system. To reach ...

  14. The Phase-1 Upgrade of the ATLAS First Level Calorimeter Trigger

    CERN Document Server

    Andrei, George Victor; The ATLAS collaboration

    2017-01-01

    The ATLAS Level-1 calorimeter trigger is planning a series of upgrades in order to face the challenges posed by the upcoming increase of the LHC luminosity. The hardware built for the Phase-1 upgrade will be installed during the long shutdown of the LHC starting in 2019, with the aim of being fully commissioned before the restart in 2021. The upgrade will benefit from new front-end electronics for parts of the calorimeter, which provide the trigger system with digital data with a tenfold increase in granularity. This makes possible the use of more complex algorithms than those currently used, while maintaining low trigger thresholds under much harsher collision conditions. Of principal significance among these harsher conditions will be the increased number of interactions per bunch crossing, known as pile-up. The Level-1 calorimeter system upgrade consists of an active and a passive system for digital data distribution and three different Feature EXtraction systems (FEXs) which run complex algorithms to identify el...

  15. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Pérez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum center of mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices. Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recor...

  16. The ATLAS level-1 trigger: Status of the system and first results from cosmic-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Aielli, G [Universita degli Studi di Roma ' Tor Vergata' and INFN Roma II, Rome (Italy); Andrei, V; Achenbach, R [Kirchhoff-Institut fuer Physik, University of Heidelberg, D-69120 Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London E1 4NS (United Kingdom); Aloisio, A; Alviggi, M G [Universita degli Studi di Napoli ' Federico II' and INFN Napoli (Italy); Antonelli, S [INFN Bologna and Universita degli Studi di Bologna (Italy); Ask, S [CERN, PH Department (Switzerland); Barnett, B M [CCLRC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX (United Kingdom); Bauss, B [Institut fuer Physik, University of Mainz, D-55099 Mainz (Germany); Bellagamba, L [INFN Bologna and Universita degli Studi di Bologna (Italy); Ben Ami, S [Technion Israel Institute of Technology (Israel); Bendel, M [Institut fuer Physik, University of Mainz, D-55099 Mainz (Germany); Benhammou, Y [Tel Aviv University (Israel); Berge, D. [CERN, PH Department (Switzerland)], E-mail: David.Berge@cern.ch; Bianco, M [Universita degli Studi di Lecce and INFN Lecce (Italy); Biglietti, M G [Universita degli Studi di Napoli ' Federico II' and INFN Napoli (Italy); Bohm, C [Fysikum, University of Stockholm, SE-10691 Stockholm (Sweden); Booth, J R.A. [School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT (United Kingdom); CCLRC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX (United Kingdom); Boscherini, D [INFN Bologna and Universita degli Studi di Bologna (Italy)

    2007-10-21

    The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton-proton collisions from beams crossing at 40 MHz. At the design luminosity of 10^34 cm^-2 s^-1 there are on average 23 collisions per bunch crossing. A three-level trigger system will select potentially interesting events in order to reduce the readout rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and makes an initial fast selection based on detector data of coarse granularity. It has to reduce the rate by a factor of 10^4 to less than 100 kHz. The other two consecutive trigger levels are in software and run on PC farms. We present an overview of the first-level trigger system and report on the current installation status. Moreover, we show analysis results of cosmic-ray data recorded in situ at the ATLAS experimental site with final or close-to-final hardware.
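
    The figures quoted above are mutually consistent; a one-line check using only the numbers in the abstract:

```latex
% Consistency check using only the figures quoted in the abstract above.
R_{\mathrm{int}} \approx 23 \times 40\,\mathrm{MHz} \approx 9 \times 10^{8}\,\mathrm{s}^{-1},
\qquad
\frac{R_{\mathrm{int}}}{10^{4}} \approx 9 \times 10^{4}\,\mathrm{s}^{-1} \lesssim 100\,\mathrm{kHz}.
```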

  17. The ATLAS level-1 trigger: Status of the system and first results from cosmic-ray data

    International Nuclear Information System (INIS)

    Aielli, G.; Andrei, V.; Achenbach, R.; Adragna, P.; Aloisio, A.; Alviggi, M.G.; Antonelli, S.; Ask, S.; Barnett, B.M.; Bauss, B.; Bellagamba, L.; Ben Ami, S.; Bendel, M.; Benhammou, Y.; Berge, D.; Bianco, M.; Biglietti, M.G.; Bohm, C.; Booth, J.R.A.; Boscherini, D.

    2007-01-01

    The ATLAS detector at CERN's Large Hadron Collider (LHC) will be exposed to proton-proton collisions from beams crossing at 40 MHz. At the design luminosity of 10^34 cm^-2 s^-1 there are on average 23 collisions per bunch crossing. A three-level trigger system will select potentially interesting events in order to reduce the readout rate to about 200 Hz. The first trigger level is implemented in custom-built electronics and makes an initial fast selection based on detector data of coarse granularity. It has to reduce the rate by a factor of 10^4 to less than 100 kHz. The other two consecutive trigger levels are in software and run on PC farms. We present an overview of the first-level trigger system and report on the current installation status. Moreover, we show analysis results of cosmic-ray data recorded in situ at the ATLAS experimental site with final or close-to-final hardware.

  18. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Perez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum centre of mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices. Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independent test of each signature, guarantying u...

  19. Towards a Level-1 Tracking Trigger for the ATLAS Experiment

    CERN Document Server

    De Santo, A; The ATLAS collaboration

    2014-01-01

    Plans for a physics-driven upgrade of the LHC foresee staged increases of the accelerator's average instantaneous luminosity, of up to a factor of five compared to the original design. In order to cope with the sustained luminosity increase, and the resulting higher detector occupancy and particle interaction rates, the ATLAS experiment is planning phased upgrades of the trigger system and of the DAQ infrastructure. In the new conditions, maintaining an adequate signal acceptance for electroweak processes will pose unprecedented challenges, as the default solution to cope with the higher rates would be to increase thresholds on the transverse momenta of physics objects (leptons, jets, etc.). Therefore the possibility of applying fast processing at the first trigger level, in order to use tracking information as early as possible in the trigger selection, represents a most appealing opportunity, which can preserve the ATLAS trigger's selectivity without reducing its flexibility. Studies to explore the feasibility o...

  20. The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning

    CERN Document Server

    INSPIRE-00226165

    2014-01-01

    The ATLAS detector at the LHC will require a trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. By 2015 the LHC instantaneous luminosity will be increased up to 3 x 10^34 cm^-2 s^-1; this represents an unprecedented challenge faced by the ATLAS trigger system. To cope with the higher event rate and efficiently select relevant events from a physics point of view, a new element will be included in the Level-1 trigger scheme after 2015: the Topological Processor (L1Topo). The L1Topo system, currently developed at CERN, will consist initially of an ATCA crate and two L1Topo modules. A high-density opto-electrical converter (AVAGO miniPOD) drives up to 1.6 Tb/s of data from the calorimeter and muon detectors into two high-end FPGAs (Virtex-7 690), to be processed in about 200 ns. The design has been optimized to guarantee excellent signal integrity of the high-speed links and low-latency data transmission on the Real Time Data Path (RTDP). The L1Topo receives data in a standa...

  1. Simulation and Validation of the ATLAS Level-1 Topological Trigger

    CERN Document Server

    Bakker, Pepijn Johannes; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment has recently commissioned a new component of its first-level trigger: the L1 topological trigger. This system, using state-of-the-art FPGA processors, makes it possible to reject events by applying topological requirements, such as kinematic criteria involving clusters, jets, muons, and total transverse energy. The data recorded using the L1 topological trigger demonstrate that this innovative trigger strategy allows for an improved rejection rate without efficiency loss. This improvement has been shown for several relevant physics processes leading to low-$p_T$ leptons, including $H\to{}\tau{}\tau{}$ and $J/\Psi\to{}\mu{}\mu{}$. In addition, an accurate simulation of the L1 topological trigger is used to validate and optimize the performance of this trigger. To reach such an accuracy, this simulation must take into account the fact that the firmware algorithms are executed on an FPGA architecture, while the simulation is executed on a floating-point architecture.
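
    As a toy illustration of why a bit-accurate simulation of the firmware matters (the granularity and threshold below are invented, not the real L1Topo encoding): a cut applied on quantised integer counts can accept values that a naive floating-point comparison would reject.

```python
# Toy model: quantising a quantity to 0.1-unit counts (truncation) turns a cut at
# 2.55 into an effective cut at 2.5, so values between 2.5 and 2.55 pass the
# "firmware" comparison but fail the floating-point one. Granularity and
# threshold are invented for illustration.
def to_counts(value, lsb=0.1):
    """Quantise a floating-point value to integer counts of size lsb (truncation)."""
    return int(value / lsb)

threshold = 2.55
threshold_counts = to_counts(threshold)

for value in (2.50, 2.51, 2.55, 2.60):
    float_pass = value >= threshold
    fixed_pass = to_counts(value) >= threshold_counts
    print(f"{value:.2f}: floating-point pass = {float_pass}, fixed-point pass = {fixed_pass}")
```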

  2. ATLAS TDAQ application gateway upgrade during LS1

    CERN Document Server

    KOROL, A; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, A C; DUBROV, S; HAFEEZ, M; LEE, C J; SCANNICCHIO, D A; TWOMEY, M; VORONKOV, A; ZAYTSEV, A

    2014-01-01

    The ATLAS Gateway service is implemented with a set of dedicated computer nodes to provide fine-grained access control between the CERN General Public Network (GPN) and the ATLAS Technical Control Network (ATCN). ATCN connects the ATLAS online farm used for ATLAS operations and data taking, including the ATLAS TDAQ (Trigger and Data Acquisition) and DCS (Detector Control System) nodes. In particular, it provides restricted access to the web services (proxy), general login sessions (via SSH and RDP protocols), NAT and mail relay from ATCN. At the operating-system level the implementation is based on virtualization technologies. Here we report on the Gateway upgrade during the Long Shutdown 1 (LS1) period: it includes the transition to the latest production release of the CERN Linux distribution (SLC6), the migration to the centralized configuration management system (based on Puppet) and the redesign of the internal system architecture.

  3. Towards a Level-1 tracking trigger for the ATLAS experiment

    CERN Document Server

    AUTHOR|(CDS)2070911; The ATLAS collaboration

    2015-01-01

    Among the upgrades for the High-Luminosity LHC era, the ATLAS collaboration is studying and developing the means to make inner detector tracking information available at the first level of its three-tiered event selection chain. This will provide additional flexibility and rejection power: essential ingredients in order to cope with the demanding conditions of the upgraded LHC, as well as with unforeseen bandwidth constraints. The current state of the feasibility and performance studies is discussed.

  4. ATLAS High-Level Trigger Performance for Calorimeter-Based Algorithms in LHC Run-I

    CERN Document Server

    Mann, A; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated during the three years of Run-I of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of a Higgs boson. More precise measurements of this particle must be performed, and there are other very important physics topics still to be explored. One of the key components of the ATLAS detector is its trigger system. It is composed of three levels: one (called Level 1, L1) built on custom hardware and two others based on software algorithms, called Level 2 (L2) and the Event Filter (EF), altogether referred to as the ATLAS High Level Trigger. The ATLAS trigger is responsible for reducing almost 20 million collisions per second produced by the accelerator to less than 1000. The L2 operates only in the regions tagged by the first hardware level as containing possible interesting physics, while the EF operates in the full detector, normally using offline-like algorithms to...

  5. Development of the new trigger processor board for the ATLAS Level-1 endcap muon trigger for Run-3

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525035; The ATLAS collaboration

    2017-01-01

    The instantaneous luminosity of the LHC will be increased by up to a factor of three with respect to the original design value in Run-3 (starting 2021). The ATLAS Level-1 end-cap muon trigger in LHC Run-3 will identify muons by combining data from the Thin-Gap Chamber detector (TGC) and the New Small Wheel (NSW), a new detector able to operate at the high background hit rates of Run-3, in order to suppress the Level-1 trigger rate. To handle data from both the TGC and the NSW, a new trigger processor board has been developed. The board has a modern FPGA to make use of multi-gigabit transceiver technology. The readout system for trigger data has also been designed with TCP/IP instead of a dedicated ASIC. This letter presents the electronics and firmware of the ATLAS Level-1 end-cap muon trigger processor board for LHC Run-3.

  6. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPUs required, and the relative financial cost of the selected GPGPU. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
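
    The 100 kHz input rate and the ~250 ms average processing time quoted above already fix the scale of the farm; the arithmetic below (Little's law, using only the two quoted figures) shows it must hold on the order of 25,000 events in flight at any time.

```python
# Capacity estimate implied by the two figures quoted above (Little's law):
# events in flight = input rate x average processing time.
input_rate_hz = 100_000      # Level-1 acceptance rate
time_per_event_s = 0.250     # average High Level Trigger processing time
events_in_flight = input_rate_hz * time_per_event_s
print(f"~{events_in_flight:,.0f} events processed concurrently")  # ~25,000
```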

  7. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and gathering information about physics query patterns. In this paper we describe the status of the Global TAG relational database scalability work and highlight areas of future direction.
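
    The quoted numbers imply a tight per-event budget; the arithmetic below makes this explicit (database indexes and other overheads are ignored in this estimate).

```python
# Rough storage budget implied by the quoted figures (1 TB, one billion events,
# two hundred attributes per event); indexes and database overhead are ignored.
total_bytes = 1.0e12
n_events = 1.0e9
n_attributes = 200
bytes_per_event = total_bytes / n_events
bytes_per_attribute = bytes_per_event / n_attributes
print(f"~{bytes_per_event:.0f} bytes per event, ~{bytes_per_attribute:.0f} bytes per attribute")
```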

  8. Control, Test and Monitoring Software Framework for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Achenbach, R; Aharrouche, M; Andrei, V; Åsman, B; Barnett, B M; Bauss, B; Bendel, M; Bohm, C; Booth, J R A; Bracinik, J; Brawn, I P; Charlton, D G; Childers, J T; Collins, N J; Curtis, C J; Davis, A O; Eckweiler, S; Eisenhandler, E F; Faulkner, P J W; Fleckner, J; Föhlisch, F; Gee, C N P; Gillman, A R; Goringer, C; Groll, M; Hadley, D R; Hanke, P; Hellman, S; Hidvegi, A; Hillier, S J; Johansen, M; Kluge, E E; Kühl, T; Landon, M; Lendermann, V; Lilley, J N; Mahboubi, K; Mahout, G; Meier, K; Middleton, R P; Moa, T; Morris, J D; Müller, F; Neusiedl, A; Ohm, C; Oltmann, B; Perera, V J O; Prieur, D P F; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Sjölin, J; Staley, R J; Stamen, R; Stockton, M C; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Watkins, P M; Watson, A; Weber, P; Wessels, M; Wildt, M

    2008-01-01

    The ATLAS first-level calorimeter trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates and to measure total and missing ET in the ATLAS calorimeters. The complete trigger system consists of over 300 custom-designed VME modules of varying complexity. These modules are based around FPGAs or ASICs with many configurable parameters, both to initialize the system with correct calibrations and timings and to allow flexibility in the trigger algorithms. The control, testing and monitoring of these modules requires a comprehensive, but well-designed and modular, software framework, which we will describe in this paper.

  9. Multi-Threaded Algorithms for General Purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  10. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. (On surface; no restricted access.) The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  11. Tracking and Level-1 triggering in the forward region of the ATLAS Muon Spectrometer at sLHC

    International Nuclear Information System (INIS)

    Bittner, B; Dubbert, J; Kroha, H; Richter, R; Schwegler, P

    2012-01-01

    In the endcap region of the ATLAS Muon Spectrometer (η > 1) precision tracking and Level-1 triggering are performed by different types of chambers. Monitored Drift Tube chambers (MDT) and Cathode Strip Chambers (CSC) are used for precision tracking, while Thin Gap Chambers (TGC) form the Level-1 muon trigger, selecting muons with high transverse momentum (pT). When by 2018 the LHC peak luminosity of 10^34 cm^-2 s^-1 is increased by a factor of ∼2, and by another factor of ∼2–2.5 in about a decade from now ("SLHC"), an improvement of both systems, precision tracking and Level-1 triggering, will become mandatory in order to cope with the high rate of uncorrelated background hits ("cavern background") and to stay below the maximum trigger rate for the muon system, which is in the range of 10–20% of the 100 kHz rate allowed for ATLAS. For the Level-1 trigger of the ATLAS Muon Spectrometer this means a stronger suppression of sub-threshold muons in the high-pT trigger as well as a better rejection of tracks not coming from the primary interaction point. Both requirements, however, can only be fulfilled if the spatial resolution and angular pointing accuracy of the trigger chambers, in particular of those in the Inner Station of the endcap, are improved by a large factor. This calls for a complete replacement of the currently used TGC chambers by a new type of trigger chamber with better performance. In parallel, the precision tracking chambers must be replaced by chambers with higher rate capability to be able to cope with the intense cavern background. In this article we present concepts to decisively improve the Level-1 trigger with newly developed trigger chambers, characterized by excellent spatial resolution, good time resolution and sufficiently short latency. We also present new types of precision chambers, designed to maintain excellent tracking efficiency and spatial resolution in the presence of high levels of uncorrelated

  12. The Development of the Global Feature eXtractor (gFEX) for ATLAS Level 1 Calorimeter Trigger at the LHC

    CERN Document Server

    Tang, Shaochun; The ATLAS collaboration; Chen, Hucheng

    2018-01-01

    As part of the ATLAS Phase-I upgrade, the gFEX is designed to maintain the trigger acceptance of the ATLAS Level-1 calorimeter trigger system against the increasing luminosity. The gFEX identifies patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. Prototypes v1 and v2 were designed and fully tested in 2015 and 2016, respectively. With the lessons learned, a pre-production board has been implemented as an ATCA module with three UltraScale+ FPGAs, one ZYNQ UltraScale+, and 35 MiniPODs. This board will receive coarse-granularity (0.2x0.2) information from the entire ATLAS calorimeters on up to 300 optical fibers, and each FPGA has 24 links to the L1Topo at speeds of up to 12.8 Gb/s.

  13. The design of a fast Level-1 track trigger for the high luminosity upgrade of ATLAS.

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00413032; The ATLAS collaboration

    2016-01-01

    The high-luminosity upgrade of the LHC will increase the rate of proton-proton collisions by approximately a factor of 5 with respect to the initial LHC design. The ATLAS experiment will be upgraded accordingly, increasing its robustness and selectivity in the expected high-radiation environment. In particular, the earliest, hardware-based, ATLAS trigger stage ("Level 1") will require higher rejection power, while still maintaining efficient selection on many different physics signatures. The key ingredient is the possibility of extracting tracking information from the brand-new full-silicon detector and using it in the selection. While fascinating, this solution poses a big challenge in the choice of the architecture, due to the reduced latency available at this trigger level (a few tens of microseconds) and the high expected working rates (of order MHz). In this paper, we review the design possibilities of such a system in a potential new trigger and readout architecture, and present the performance resulting from a d...

  14. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  15. ATLAS Level-1 Muon Barrel Trigger robustness study at X5 test facility

    CERN Document Server

    Di Mattia, A; Nisati, A; Pastore, F C; Vari, R; Veneziano, Stefano; Aielli, G; Camarri, P; Cardarelli, R; Di Ciaccio, A; Di Simone, A; Liberti, B; Santonico, R

    2004-01-01

    The present paper describes the Level-1 Barrel Muon Trigger performance as expected with the current configuration of the RPC detectors, as designed for the Barrel Muon Spectrometer of ATLAS. Results of a beam test performed at the X5-GIF facility at CERN are presented in order to show the trigger efficiency under different conditions of RPC detection efficiency and several background rates. Small RPC chambers with part of the final trigger electronics are used, while the trigger coincidence logic is applied off-line using a detailed simulation model. © 2003 Published by Elsevier B.V. (3 refs).

  16. Optimisation of the level-1 calorimeter trigger at ATLAS for Run II

    Energy Technology Data Exchange (ETDEWEB)

    Suchek, Stanislav [Kirchhoff-Institute for Physics, Im Neuenheimer Feld 227, 69120 Heidelberg (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    The Level-1 Calorimeter Trigger (L1Calo) is a central part of the ATLAS Level-1 Trigger system, designed to identify jet, electron, photon, and hadronic tau candidates, and to measure their transverse energies, as well as the total transverse energy and missing transverse energy. The optimisation of the jet energy resolution is an important part of the L1Calo upgrade for Run II. A Look-Up Table (LUT) is used to translate the electronic signal from each trigger tower to its transverse energy. By optimising the LUT calibration we can achieve better jet energy resolution and better performance of the jet transverse energy triggers, which are vital for many physics analyses. In addition, the improved energy calibration leads to significant improvements of the missing transverse energy resolution. A new Multi-Chip Module (MCM), introduced as part of the L1Calo upgrade, provides two separate LUTs for jets and electrons/photons/taus, allowing the jet transverse energy and missing transverse energy to be optimised separately from the electromagnetic objects. The optimisation is validated using jet transverse energy and missing transverse energy trigger turn-on curves and rates.
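
    As a rough illustration of the LUT mechanism described above, the following sketch (with invented pedestal, gain and noise-cut values, not the real L1Calo calibration constants) maps a 10-bit trigger-tower ADC count to an 8-bit transverse energy through a pre-computed table, and shows how two separate tables could serve the jet and electromagnetic paths, as the new MCM allows.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Minimal sketch of a trigger-tower Look-Up Table (LUT): hypothetical
// pedestal, gain and noise-cut values, not the real L1Calo calibration.
std::array<uint8_t, 1024> buildLut(double pedestal, double gevPerCount,
                                   double noiseCutGeV) {
    std::array<uint8_t, 1024> lut{};
    for (std::size_t adc = 0; adc < lut.size(); ++adc) {
        double et = (static_cast<double>(adc) - pedestal) * gevPerCount;
        if (et < noiseCutGeV) et = 0.0;            // suppress noise below cut
        if (et > 255.0) et = 255.0;                // saturate 8-bit output
        lut[adc] = static_cast<uint8_t>(et + 0.5); // ET in GeV counts
    }
    return lut;
}

int main() {
    // Separate LUTs for jets and e/gamma/tau, as the upgraded MCM allows.
    auto jetLut = buildLut(32.0, 0.25, 1.0);   // looser noise cut (invented)
    auto emLut  = buildLut(32.0, 0.25, 2.0);   // tighter noise cut (invented)
    uint16_t adc = 100;                        // example raw tower signal
    std::cout << "jet ET = " << int(jetLut[adc])
              << " GeV, em ET = " << int(emLut[adc]) << " GeV\n";
}
```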

  17. Development of the detector control system for the ATLAS Level-1 trigger and measurement of the single top production cross section

    CERN Document Server

    Curtis, Christopher J

    This thesis discusses the development of the Detector Control System (DCS) for the ATLAS Level-1 Trigger. Microcontroller code has been developed to read out slow-controls data from the Level-1 Calorimeter Trigger modules into the wider DCS. Back-end software has been developed for archiving this data. A Finite State Machine (FSM) has also been developed to offer remote access to the L1 Trigger hardware from the ATLAS Control Room. This thesis also discusses the discovery potential for electroweak single top production during early running. Using Monte Carlo data, some of the major systematics are discussed. A potential upper limit on the production cross section is calculated to be 45.2 pb. If the Standard Model prediction is assumed, a measured signal could potentially have a significance of up to 2.23σ using 200 pb−1 of data.

  18. The ATLAS online High Level Trigger framework experience reusing offline software components in the ATLAS trigger

    CERN Document Server

    Wiedenmann, W

    2009-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework, based on the Gaudi and ATLAS Athena frameworks, forms the interface layer which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking peri...

  19. ATLAS Level-1 Calorimeter Trigger Subsystem Tests of a Prototype Cluster Processor Module

    CERN Document Server

    Garvey, J; Apostologlou, P; Ay, C; Barnett, B M; Bauss, B; Brawn, I P; Bohm, C; Dahlhoff, A; Davis, A O; Edwards, J; Eisenhandler, E F; Gee, C N P; Gillman, A R; Hanke, P; Hellman, S; Hidévgi, A; Hillier, S J; Jakobs, K; Kluge, E E; Landon, M; Mahboubi, K; Mahout, G; Meier, K; Meshkov, P; Moye, T H; Mills, D; Moyse, E; Nix, O; Penno, K; Perera, V J O; Qian, W; Schmitt, K; Schäfer, U; Silverstein, S; Staley, R J; Thomas, J; Trefzger, T M; Watkins, P M; Watson, A; 9th Workshop On Electronics For LHC Experiments - LECC 2003

    2003-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce trigger multiplicity and Region-of-Interest (RoI) information. The trigger will also provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes by using Readout Driver (ROD) Modules. The CP Modules (CPM) are designed to find isolated electron/photon and hadron/tau clusters in overlapping windows of trigger towers. Each pipelined CPM processes 8-bit data from a total of 128 trigger towers at each LHC crossing. Four full-specification prototypes of CPMs have been built and results of complete tests on individual boards will be presented. These modules were then integrated with other modules to build an ATLAS Level-1 Calorimeter Trigger subsystem test bench. Realtime data were exchanged between modules, and time-slice readout data were tagged and transferr...

  20. Upgrade of the Level-1 muon trigger of the ATLAS detector in the barrel-endcap transition region with RPC chambers

    CERN Document Server

    Massa, L; The ATLAS collaboration

    2014-01-01

    This report presents a project for the upgrade of the Level-1 muon trigger in the barrel-endcap transition region (1.0<|η|<1.3). A large fraction of the Level-1 muon trigger rate in the endcap region (|η|>1.0) is caused by charged particles originating from secondary interactions downstream of the interaction point. After the LHC phase-1 upgrade, foreseen for 2018, the Level-1 muon trigger rate would saturate the allocated bandwidth unless new measures are adopted to improve the rejection of fake triggers. ATLAS is going to improve the trigger selectivity in the region |η|>1.3 with the addition of the New Small Wheel detector as an inner trigger plane. To obtain a similar trigger selectivity in the barrel-endcap transition region 1.0<|η|<1.3, it is proposed to add new RPC chambers at the edge of the inner layer of the barrel muon spectrometer. These chambers will be based on a three-layer structure with thinner gas gaps and electrodes with respect to the ATLAS standard and a new low-profile, light-weight mechanical structure that will allow the installation in the limited available spa...

  1. Simulation of dynamic pile-up corrections in the ATLAS level-1 calorimeter trigger

    Energy Technology Data Exchange (ETDEWEB)

    Narrias-Villar, Daniel; Wessels, Martin; Brandt, Oleg [Heidelberg University, Heidelberg (Germany)

    2015-07-01

    The Level-1 Calorimeter Trigger is a crucial part of the ATLAS trigger effort to select only relevant physics events out of the large number of interactions at the LHC. In Run II, in which the LHC will double the centre-of-mass energy and further increase the instantaneous luminosity, pile-up is a key limiting factor for triggering and reconstruction of relevant events. The upgraded L1Calo Multi-Chip Modules (nMCM) will address this problem by applying dynamic pile-up corrections in real time, of which a precise simulation is crucial for physics analysis. Therefore pile-up effects are studied in order to provide a predictable, parametrised baseline correction for the Monte Carlo simulation. Physics validation plots, such as trigger rates and turn-on curves, are presented.
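
    The abstract does not spell out the correction algorithm; the sketch below is only a plausible illustration of a bunch-position-dependent pedestal subtraction, with invented offsets, showing the kind of real-time correction a dynamic pile-up scheme applies before any threshold is evaluated.

```cpp
#include <iostream>
#include <vector>

// Sketch of a dynamic, bunch-position-dependent pedestal correction
// (invented numbers; a real correction would be derived from measured
// average tower activity as a function of position in the bunch train).
int correctedSignal(int rawAdc, int basePedestal,
                    const std::vector<int>& pileupOffset, int bcid) {
    int dynamicPedestal = basePedestal + pileupOffset[bcid % pileupOffset.size()];
    int corrected = rawAdc - dynamicPedestal;
    return corrected > 0 ? corrected : 0;  // clamp at zero after subtraction
}

int main() {
    // Hypothetical per-BCID pile-up offsets: larger early in the bunch train.
    std::vector<int> pileupOffset = {6, 5, 4, 3, 3, 2, 2, 2};
    int raw = 45, pedestal = 32;
    for (int bcid = 0; bcid < 4; ++bcid)
        std::cout << "BCID " << bcid << ": corrected ADC = "
                  << correctedSignal(raw, pedestal, pileupOffset, bcid) << "\n";
}
```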

  2. A self seeded first level track trigger for ATLAS

    International Nuclear Information System (INIS)

    Schöning, A

    2012-01-01

    For the planned high-luminosity upgrade of the Large Hadron Collider, aiming to increase the instantaneous luminosity to 5 × 10^34 cm^-2 s^-1, the implementation of a first-level track trigger has been proposed. This trigger could be installed in the year ∼2021 along with the complete renewal of the ATLAS inner detector. The fast readout of the hit information from the Inner Detector is considered the main challenge of such a track trigger. Different concepts for the implementation of a first-level trigger are currently studied within the ATLAS collaboration. The so-called 'Self-Seeded' track trigger concept exploits fast front-end filtering algorithms, based on cluster-size reconstruction and fast vector tracking, to select hits associated with high-momentum tracks. Simulation studies have been performed and results on efficiencies, purities and trigger rates are presented for different layouts.
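
    As a hedged illustration of the front-end filtering idea mentioned above, the sketch below keeps only clusters whose width is compatible with a stiff, high-momentum track; the cluster model and width threshold are invented and stand in for the actual cluster-size reconstruction.

```cpp
#include <iostream>
#include <vector>

// Sketch of front-end hit filtering for a self-seeded track trigger:
// high-momentum tracks cross a strip layer nearly perpendicular and leave
// small clusters, so wide clusters can be rejected locally.
// The cluster-width threshold below is invented for illustration.
struct Cluster { int firstStrip; int width; };

std::vector<Cluster> selectHighPtCandidates(const std::vector<Cluster>& clusters,
                                            int maxWidth) {
    std::vector<Cluster> selected;
    for (const auto& c : clusters)
        if (c.width <= maxWidth)   // small cluster => steep incidence => high pT
            selected.push_back(c);
    return selected;
}

int main() {
    std::vector<Cluster> hits = {{10, 1}, {52, 4}, {90, 2}, {130, 7}};
    auto highPt = selectHighPtCandidates(hits, 2);
    std::cout << highPt.size() << " of " << hits.size()
              << " clusters kept for the track-trigger seed\n";
}
```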

  3. The Virtual Point 1 event display for the ATLAS experiment

    International Nuclear Information System (INIS)

    Kittelmann, Thomas; Tsulaia, Vakhtang; Boudreau, Joseph; Moyse, Edward

    2010-01-01

    We present an event display for the ATLAS Experiment, called Virtual Point 1 (VP1), designed initially for deployment at point 1 of the LHC, the location of the ATLAS detector. The Qt/OpenGL based application provides truthful and interactive 3D representations of both event and non-event data, and now serves a general-purpose role within the experiment. Thus, VP1 is used both online (in the control room itself or remotely via a special 'live' mode) and offline environments to provide fast debugging and understanding of events, detector status and software. In addition to a flexible plugin infrastructure and a high level of configurability, this multi-purpose role is mainly facilitated by embedding the application directly into the ATLAS offline software framework, enabling it to use the native Event Data Model directly, and thus run on any source of ATLAS data, or even directly from within processes such as reconstruction jobs. Finally, VP1 provides high-quality pictures and movies, useful for outreach purposes.

  4. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  5. The ATLAS trigger high-level trigger commissioning and operation during early data taking

    CERN Document Server

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre of mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. After the Level 1 trigger, which is implemented in custom hardware, the High-Level Trigger (HLT) further reduces the rate from up to 100 kHz to the offline storage rate while retaining the most interesting physics. The HLT is implemented in software running in commercially available computer farms and consists of Level 2 and the Event Filter. To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection. Data produced during LHC commissioning will be vital for calibrating and aligning sub-detectors, as well as for testing the ATLAS trigger and setting up t...

  6. The Phase-1 Upgrade for the Level-1 Muon Barrel Trigger of the ATLAS Experiment at LHC

    CERN Document Server

    Izzo, Vincenzo; The ATLAS collaboration

    2018-01-01

    The Level-1 Muon Barrel Trigger of the ATLAS Experiment at LHC makes use of Resistive Plate Chamber (RPC) detectors. The on-detector trigger electronics modules are able to identify muons with predefined transverse momentum values (pT) by executing a coincidence logic on signals coming from the various detector layers. On-detector trigger boards then transfer trigger data to the off-detector electronics. A complex trigger system processes the incoming data by combining trigger information from the barrel and the endcap regions, and providing the combined muon candidate to the Central Trigger Processor (CTP). For almost a decade, the Level-1 Trigger system operated very well, despite the challenging requirements on trigger efficiency and performance, and the continuously increasing LHC luminosity. In order to cope with these constraints, various upgrades for the full trigger system were already deployed, and others have been designed to be installed in the next years. Most of the upgrades to the trigger system...

  7. The Phase-1 Upgrade for the Level-1 Muon Barrel Trigger of the ATLAS Experiment at LHC

    CERN Document Server

    Izzo, Vincenzo; The ATLAS collaboration

    2018-01-01

    The Level-1 Muon Barrel Trigger of the ATLAS Experiment at LHC makes use of Resistive Plate Chamber (RPC) detectors. The on-detector trigger electronics modules are able to identify muons with predefined transverse momentum values (pT) by executing a coincidence logic on signals coming from the various detector layers. Then, on-detector trigger boards transfer trigger data to the off-detector electronics. A complex trigger system processes the incoming data by combining trigger information from the Barrel and the End-cap regions, and by providing the combined muon candidate to the Central Trigger Processor (CTP). For almost a decade, the Level-1 Trigger system has been operating very well, despite the challenging requirements on trigger efficiency and performance, and the continuously increasing LHC luminosity. In order to cope with these constraints, various upgrades for the full trigger system were already deployed, and others have been designed to be installed in the next years. Most of the upgrades to the...

  8. Operation and Performance of the ATLAS Level-1 Calorimeter and Topological Triggers in Run 2

    CERN Document Server

    Weber, Sebastian Mario; The ATLAS collaboration

    2017-01-01

    In Run 2 at CERN's Large Hadron Collider, the ATLAS detector uses a two-level trigger system to reduce the event rate from the nominal collision rate of 40 MHz to the event storage rate of 1 kHz, while preserving interesting physics events. The first step of the trigger system, Level-1, reduces the event rate to 100 kHz within a latency of less than 2.5 μs. One component of this system is the Level-1 Calorimeter Trigger (L1Calo), which uses coarse-granularity information from the electromagnetic and hadronic calorimeters to identify regions of interest corresponding to electrons, photons, taus, jets, and large amounts of transverse energy and missing transverse energy. In these proceedings, we discuss improved features and performance of the L1Calo system in the challenging, high-luminosity conditions provided by the LHC in Run 2. A new dynamic pedestal correction algorithm reduces pile-up effects and the use of variable thresholds and isolation criteria for electromagnetic objects allows for opt...
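
    The following sketch illustrates, with invented numbers, what a variable threshold plus isolation requirement on an electromagnetic candidate can look like; it is not the actual L1Calo menu logic.

```cpp
#include <cmath>
#include <iostream>

// Sketch of a Level-1 e/gamma selection with a variable (eta-dependent)
// threshold and an isolation requirement; all values are invented and only
// illustrate the type of cut described above.
bool passEmTrigger(double clusterEtGeV, double isolationEtGeV, double eta) {
    double threshold = 22.0;                       // nominal threshold in GeV
    if (std::abs(eta) > 1.37 && std::abs(eta) < 1.52)
        threshold += 2.0;                          // stricter in the crack region
    bool isolated = isolationEtGeV < 2.0 + 0.05 * clusterEtGeV;
    return clusterEtGeV > threshold && isolated;
}

int main() {
    std::cout << std::boolalpha
              << passEmTrigger(25.0, 1.5, 0.4) << " "    // central, isolated: passes
              << passEmTrigger(23.0, 1.5, 1.45) << "\n"; // crack region: fails
}
```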

  9. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software by the use of multi-core processors in the computing farms and the experiences gained with multi-threading and multi-process technologies.

  10. Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger

    CERN Document Server

    Sidoti, A; The ATLAS collaboration; Ospanov, R

    2010-01-01

    Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate which has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and assess the overall quality of the trigger selection during collisions running. ATLAS has broad physics goals which require a large number of different active triggers due to complex event topologies, and this in turn requires quite sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware while the high level triggers (HLT) are software based and run on large PC farms. The trigger reduces the design bunch-crossing rate of 40 MHz to an average event rate of about 200 Hz for storage. Since the ATLAS detector is a general purpose detector, the trigger must be sensitive to a large numb...

  11. Studies of ATM for ATLAS high-level triggers

    CERN Document Server

    Bystrický, J; Huet, M; Le Dû, P; Mandjavidze, I D

    2001-01-01

    This paper presents some of the conclusions of our studies on asynchronous transfer mode (ATM) and fast Ethernet in the ATLAS level-2 trigger pilot project. We describe the general concept and principles of our data-collection and event-building scheme that could be transposed to various experiments in high-energy and nuclear physics. To validate the approach in view of ATLAS high-level triggers, we assembled a testbed composed of up to 48 computers linked by a 7.5-Gbit/s ATM switch. This modular switch is used as a single entity or is split into several smaller interconnected switches. This allows study of how to construct a large network from smaller units. Alternatively, the ATM network can be replaced by fast Ethernet. We detail the operation of the system and present series of performance measurements made with event-building traffic pattern. We extrapolate these results to show how today's commercial networking components could be used to build a 1000-port network adequate for ATLAS needs. Lastly, we li...

  12. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    The Large Hadron Collider will be upgraded in order to reach an instantaneous luminosity of L = 5 × 10^34 cm^-2 s^-1. A challenge for the detectors will be to cope with the excessive rate of events coming into the trigger system. In order to maintain the capability of triggering on single lepton objects with momentum thresholds of pT ≈ 25 GeV, the ATLAS detector is planning to use tracking information at the Level-1 (hardware) stage of the trigger system. Two options are currently being studied: a L0/L1 trigger design using a double-buffer front-end architecture and a single hardware trigger level which uses trigger layers in the new tracker system. Both options are presented as well as results from simulation studies.
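
    To make the regional-readout idea concrete, the sketch below selects, with an invented window size and module list, the tracker modules falling inside an eta-phi region of interest delivered by a pre-trigger; it illustrates the data-reduction principle of the seeded architecture, not ATLAS code.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Sketch of RoI-seeded regional readout: only tracker modules inside an
// eta-phi window around a pre-trigger candidate are read out, giving the
// large data reduction quoted above. Window size and modules are invented.
constexpr double kPi = 3.141592653589793;

struct Module { int id; double eta; double phi; };

double deltaPhi(double a, double b) {
    double d = std::fabs(a - b);
    return d > kPi ? 2 * kPi - d : d;
}

std::vector<int> modulesInRoI(const std::vector<Module>& modules,
                              double etaRoI, double phiRoI,
                              double dEta, double dPhi) {
    std::vector<int> ids;
    for (const auto& m : modules)
        if (std::fabs(m.eta - etaRoI) < dEta && deltaPhi(m.phi, phiRoI) < dPhi)
            ids.push_back(m.id);
    return ids;
}

int main() {
    std::vector<Module> layer = {{1, 0.10, 0.20}, {2, 0.90, 0.30}, {3, 0.15, 0.25}};
    auto readout = modulesInRoI(layer, 0.12, 0.22, 0.2, 0.2);
    std::cout << "modules to read out: " << readout.size() << "\n";
}
```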

  13. ATLAS Calorimeter system: Run-2 performance, Phase-1 and Phase-2 upgrades

    CERN Document Server

    Starz, Steffen; The ATLAS collaboration

    2018-01-01

    The ATLAS detector was designed and built to study proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to 10^{34} cm^{−2} s^{−1}. A liquid argon-lead sampling calorimeter (LAr) is employed as the electromagnetic calorimeter and hadronic calorimeter, except in the barrel region, where a scintillator-steel sampling calorimeter (TileCal) is used as the hadronic calorimeter. ATLAS recorded 87 fb^{-1} of data at a centre-of-mass energy of 13 TeV between 2015 and 2017. In order to achieve the level-1 acceptance rate of 100 kHz, certain adjustments have been performed. The calorimetry system performed according to its design values and has played a crucial role in the ATLAS physics programme. This contribution will give an overview of the detector operation, monitoring and data quality, as well as the achieved performance, including the calibration and stability of the energy scale, noise level, response uniformity and time resolution of the ATLAS cal...

  14. Performance of ATLAS L1 Calorimeter Trigger with data

    CERN Document Server

    Bracinik, J; The ATLAS collaboration

    2010-01-01

    The ATLAS first-level calorimeter trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates and to measure total and missing ET in the ATLAS calorimeters. After more than two years of commissioning in situ with calibration data and cosmic rays, the system has now been extensively used to select the most interesting proton-proton collision events. Final tuning of timing and energy calibration has been carried out in 2010 to improve the trigger response to physics objects. An analysis of the performance of the level-1 calorimeter trigger will be presented, along with the techniques used to achieve these results.

  15. Atlas 1.1: An Update to the Theory of Effective Systems Engineers

    Science.gov (United States)

    2018-01-16

    Atlas 1.1 updates a proficiency model for effective systems engineers. The model is organized into proficiency areas, which are the most discrete areas of proficiency included in Atlas; for each proficiency area there are Levels, which describe the extent of proficiency. Example areas include 1. Math/Science/General Engineering: foundational concepts from mathematics, physical sciences, and general engineering; and 2. System's Domain.

  16. Slice Test Results of the ATLAS Barrel Muon Level-1 Trigger

    CERN Document Server

    Aielli, G; Alviggi, M G; Bocci, V; Brambilla, Elena; Canale, V; Caprio, M A; Cardarelli, R; Cataldi, G; De Asmundis, R; Della Volpe, D; Di Ciaccio, A; Di Simone, A; Distante, L; Gorini, E; Grancagnolo, F; Iengo, P; Nisati, A; Pastore, F; Patricelli, S; Perrino, R; Petrolo, E; Primavera, M; Salamon, A; Santonico, R; Sekhniaidze, G; Severi, M; Spagnolo, S; Vari, R; Veneziano, Stefano; 9th Workshop On Electronics For LHC Experiments - LECC 2003

    2003-01-01

    The muon spectrometer of the ATLAS experiment makes use of Resistive Plate Chamber detectors for particle tracking in the barrel region. The level-1 muon trigger system has to measure and discriminate the muon transverse momentum, perform a fast and coarse tracking of the muon candidates, associate them to the bunch crossing corresponding to the event of interest, and measure the second coordinate in the non-bending projection. The on-detector electronics first collects front-end signals coming from the two inner RPC stations on the low-pT PAD boards, each one covering a region of Δη×Δφ = 0.2×0.2 and hosting four Coincidence Matrix ASICs. Each CMA performs the low-pT trigger algorithm and data readout on a region of Δη×Δφ = 0.2×0.1. Data coming from the four CMAs are assembled by the low-pT PAD logic. Each low-pT PAD board sends data to the corresponding high-pT PAD boards, located on the outer RPC station. Four CMAs on each board make use of the low-pT trigger result and of the front-end signals coming from...
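
    As a simplified illustration of the low-pT coincidence performed by the CMA, the sketch below opens a programmable "road" of strips in the confirm plane around a hit in the pivot plane; the road width, which encodes the pT threshold, is an invented value.

```cpp
#include <cstdlib>
#include <iostream>

// Sketch of the low-pT coincidence idea implemented in a Coincidence
// Matrix: a hit in the pivot RPC plane opens a "road" of strips in the
// confirm plane, and the candidate fires if a confirming hit falls inside.
// The road half-width (in strips) encodes the pT threshold; values invented.
bool lowPtCoincidence(int pivotStrip, int confirmStrip, int roadHalfWidth) {
    return std::abs(confirmStrip - pivotStrip) <= roadHalfWidth;
}

int main() {
    // A narrower road corresponds to a higher pT threshold.
    std::cout << std::boolalpha
              << lowPtCoincidence(120, 123, 4) << " "    // inside the 4-strip road
              << lowPtCoincidence(120, 131, 4) << "\n";  // too far: rejected
}
```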

  17. Commissioning and Validation of the ATLAS Level-1 Topological Trigger in Run 2

    CERN Document Server

    Zheng, Daniel; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment has introduced and recently commissioned a completely new hardware sub-system of its first-level trigger: the topological processor (L1Topo). L1Topo consists of two AdvancedTCA blades mounting state-of-the-art FPGA processors, providing high input bandwidth (up to 4 Gb/s) and low-latency data processing (200 ns). L1Topo is able to select collision events by applying kinematic and topological requirements on candidate objects (energy clusters, jets, and muons) measured by calorimeters and muon sub-detectors. Results from data recorded using the L1Topo trigger will be presented. These results demonstrate a significantly improved background event rejection, thus allowing for rate reduction with minimal efficiency loss. This improvement has been shown for several physics processes leading to low-pT leptons, including H→ττ and J/ψ→μμ. In addition to describing the L1Topo trigger system, we will discuss the use of an accurate L1Topo simulation as a pow...
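
    The sketch below illustrates one kind of topological requirement L1Topo can apply, an invariant-mass window on a pair of Level-1 muon candidates (relevant, for example, to J/ψ→μμ); the mass window and candidate values are invented.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// Sketch of a topological selection of the kind L1Topo performs:
// an invariant-mass window applied to a pair of Level-1 muon candidates.
// The mass window and example candidates are invented.
struct L1Muon { double pt, eta, phi; };   // GeV, pseudorapidity, radians

double invariantMass(const L1Muon& a, const L1Muon& b) {
    // Massless approximation: m^2 = 2 pT1 pT2 (cosh(deta) - cos(dphi))
    double m2 = 2.0 * a.pt * b.pt *
                (std::cosh(a.eta - b.eta) - std::cos(a.phi - b.phi));
    return std::sqrt(std::max(m2, 0.0));
}

int main() {
    L1Muon mu1{6.0, 0.3, 0.1}, mu2{4.0, 0.5, 0.9};
    double m = invariantMass(mu1, mu2);
    bool pass = (m > 2.0 && m < 9.0);   // loose di-muon mass window in GeV
    std::cout << "m(mu,mu) = " << m << " GeV, pass = "
              << std::boolalpha << pass << "\n";
}
```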

  18. Using FPGA coprocessor for ATLAS level 2 trigger application

    International Nuclear Information System (INIS)

    Khomich, Andrei; Hinkelbein, Christian; Kugel, Andreas; Maenner, Reinhard; Mueller, Matthias

    2006-01-01

    Tracking has a central role in the event selection for the High-Level Triggers of ATLAS. It is particularly important to have fast tracking algorithms in the trigger system. This paper investigates the feasibility of using an FPGA coprocessor for speeding up the TRT LUT algorithm, one of the tracking algorithms for the second-level trigger of the ATLAS experiment (CERN). Two realisations of the same algorithm have been compared: one in C++ and a hybrid C++/VHDL implementation. Using an FPGA coprocessor gives an increase in speed by a factor of two compared to a CPU-only implementation.

  19. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to overcome the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at the Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between the data taking periods. Because of the specifics of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  20. ATLAS operations in the GridKa T1/T2 Cloud

    International Nuclear Information System (INIS)

    Duckeck, G; Serfon, C; Walker, R; Harenberg, T; Kalinin, S; Schultes, J; Kawamura, G; Leffhalm, K; Meyer, J; Nderitu, S; Olszewski, A; Petzold, A; Sundermann, J E

    2011-01-01

    The ATLAS GridKa cloud consists of the GridKa Tier1 centre and 12 Tier2 sites from five countries associated to it. Over the last years a well defined and tested operation model evolved. Several core cloud services need to be operated and closely monitored: distributed data management, involving data replication, deletion and consistency checks; support for ATLAS production activities, which includes Monte Carlo simulation, reprocessing and pilot factory operation; continuous checks of data availability and performance for user analysis; software installation and database setup. Of crucial importance is good communication between sites, operations team and ATLAS as well as efficient cloud level monitoring tools. The paper gives an overview of the operations model and ATLAS services within the cloud.

  1. A readout buffer prototype for ATLAS high-level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2001-01-01

    Readout buffers are critical components in the dataflow chain of the ATLAS trigger/data-acquisition system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several readout buffers are grouped to form a readout buffer complex that acts as a data server for the high-level trigger selection algorithms and for the final data-collection system. This paper describes a functional prototype of a readout buffer based on a custom-made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data, and to distribute data chunks at the desired request rate. We describe the hardware of the card, which is based on an Intel i960 processor and complex programmable logic devices. We present the integration of several of these cards in a readout buffer complex. We measure various performance figures and discuss to which extent these can fulfil ATLAS needs. (5 refs).
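
    As a much-simplified, purely illustrative model of the readout-buffer role described above, the sketch below stores event fragments keyed by Level-1 event identifier and serves or deletes them on request; none of the real hardware, 8 MB memory management or 160 MB/s I/O is modelled.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

// Much-simplified sketch of a readout buffer: fragments arriving after each
// Level-1 accept are stored by event ID and handed out (or deleted) on
// request from the high-level trigger / event builder.
class ReadoutBuffer {
public:
    void store(uint32_t l1Id, std::vector<uint8_t> fragment) {
        buffer_[l1Id] = std::move(fragment);
    }
    const std::vector<uint8_t>* request(uint32_t l1Id) const {
        auto it = buffer_.find(l1Id);
        return it == buffer_.end() ? nullptr : &it->second;
    }
    void clear(uint32_t l1Id) { buffer_.erase(l1Id); }  // after event building
private:
    std::map<uint32_t, std::vector<uint8_t>> buffer_;
};

int main() {
    ReadoutBuffer rob;
    rob.store(42, std::vector<uint8_t>(1024, 0xAB));   // 1 kB dummy fragment
    if (auto* frag = rob.request(42))
        std::cout << "fragment for L1 ID 42: " << frag->size() << " bytes\n";
    rob.clear(42);
}
```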

  2. Performance of ATLAS RPC Level-1 muon trigger during the 2015 data taking

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00001854; The ATLAS collaboration

    2016-01-01

    RPCs are used in the ATLAS experiment at the LHC for the muon trigger in the barrel region, which corresponds to |eta|<1.05. The status of the barrel trigger system during the 2015 data taking is presented, including measurements of the RPC detector efficiencies and of the trigger performance. The RPC system has been active in more than 99.9% of the ATLAS data taking, showing very good reliability. The RPC detector efficiencies were close to the Run-1 and design values. The trigger efficiency for the high-pT thresholds used in single-muon triggers has been approximately 4% lower than in Run 1, mostly because of chambers disconnected from HV due to gas leaks. Two minor upgrades have been performed in preparation for Run 2 by adding the so-called feet and elevator chambers to increase the system acceptance. The feet chambers have been commissioned during 2015 and are included in the trigger since the last 2015 runs. Part of the elevator chambers are still in the commissioning phase and will probably need a replacement ...

  3. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Caballero, Jose; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  4. Method for a top quark mass measurement with the ATLAS detector at LHC: Study of the ATLAS level-1 electromagnetic calorimeter trigger

    International Nuclear Information System (INIS)

    Marzin, A.

    2010-01-01

    The ATLAS detector at the LHC (CERN) is designed to study the Standard Model, with the precise measurement of its parameters and the search for the Higgs boson, and the physics beyond the Standard Model with the search for new particles predicted by several theories such as Supersymmetry. The top quark is distinguished in the Standard Model by its mass, close to the scale of electroweak symmetry breaking, and is therefore a good probe to study physics beyond the Standard Model. A precise measurement of the top quark mass is also required to constrain the mass of the Higgs boson via the radiative corrections to the W boson propagator, which would be a test of the consistency of the Standard Model if the Higgs boson is discovered. The first part of this thesis presents the theoretical aspects of the top quark mass. The second part is devoted to the calibration of the ATLAS level-1 electromagnetic calorimeter trigger, and more specifically to the processing of the analogue signal coming from the calorimeter. The performance of this system with cosmic muons and first LHC collisions is also described. At last, the third part describes the methods for a top quark mass measurement which have been developed in the lepton-plus-jets and dilepton channels.

  5. Level-1 Data Driver Card of the ATLAS New Small Wheel upgrade compatible with the Phase II 1 MHz readout scheme

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00549793; The ATLAS collaboration

    2016-01-01

    The Level-1 Data Driver Card (L1DDC) will be designed for the needs of the future upgrades of the innermost stations of the ATLAS end-cap muon spectrometer. The L1DDC is a high speed aggregator board capable of communicating with a large number of front-end electronics. It collects the Level-1 data along with monitoring data and transmits them to a network interface through a single bidirectional fiber link. In addition, the L1DDC board distributes trigger, time and configuration data coming from the network interface to the front-end boards. The L1DDC is fully compatible with the Phase II upgrade where the trigger rate is expected to reach 1 MHz. This paper describes the overall scheme of the data acquisition process and especially the three different L1DDC boards that will be fabricated. Moreover the L1DDC prototype-1 is also described.

  6. The ATLAS High-Level Calorimeter Trigger in Run-2

    CERN Document Server

    Wiglesworth, Craig; The ATLAS collaboration

    2018-01-01

    The ATLAS Experiment uses a two-level triggering system to identify and record collision events containing a wide variety of physics signatures. It reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of 1 kHz, whilst maintaining high efficiency for interesting collision events. It is composed of an initial hardware-based level-1 trigger followed by a software-based high-level trigger. A central component of the high-level trigger is the calorimeter trigger. This is responsible for processing data from the electromagnetic and hadronic calorimeters in order to identify electrons, photons, taus, jets and missing transverse energy. In this talk I will present the performance of the high-level calorimeter trigger in Run-2, noting the improvements that have been made in response to the challenges of operating at high luminosity.

  7. ATLAS Transition Region Upgrade at Phase-1

    CERN Document Server

    Song, H; The ATLAS collaboration

    2014-01-01

    This report presents the L1 Muon trigger transition region (1.0<|η|<1.3) upgrade of the ATLAS Detector at Phase-1. The high fake trigger rate in the endcap region 1.0<|η|<2.4 would become a serious problem for the ATLAS L1 Muon trigger system at high luminosity. For the region 1.3<|η|<2.4, covered by the Small Wheel, ATLAS is enhancing the present muon trigger by adding local fake rejection and track angle measurement capabilities. To reduce the rate in the remaining η interval, a similar enhancement has been proposed, adding at the edge of the inner barrel a structure of three layers of new-generation RPCs. These RPCs will be based on a thinner gas gap and electrodes with respect to the ATLAS standards, a new high-performance front end integrating fast TDC capabilities, and a new low-profile and light mechanical structure allowing the installation in the tiny space available. This design effectively suppresses fake triggers by making the coincidence with both end-cap and interaction point...

  8. ATLAS High Level Calorimeter Trigger Software Performance for Cosmic Ray Events

    CERN Document Server

    Oliveira Damazio, Denis; The ATLAS collaboration

    2009-01-01

    The ATLAS detector is undergoing an intense commissioning effort with cosmic rays, preparing for the first LHC collisions next spring. Combined runs with all of the ATLAS subsystems are being taken in order to evaluate the detector performance. This is also a unique opportunity for the trigger system to be studied with different detector operation modes, such as different event rates and detector configurations. The ATLAS trigger starts with a hardware-based system which tries to identify detector regions where interesting physics objects may be found (e.g. large energy depositions in the calorimeter system). An accepted event will be further processed by more complex software algorithms at the second level, where detailed features are extracted (full-granularity data for small portions of the detector are available). Events accepted at this level will be further processed at the so-called event filter level. Full detector data at full granularity are available for offline-like processing with complete calib...

  9. Development of the ATLAS High-Level Trigger Steering and Inclusive Searches for Supersymmetry

    CERN Document Server

    Eifert, T

    2009-01-01

    The presented thesis is divided into two distinct parts. The subject of the first part is the ATLAS high-level trigger (HLT), in particular the development of the HLT Steering, and the trigger user-interface. The second part presents a study of inclusive supersymmetry searches, including a novel background estimation method for the relevant Standard Model (SM) processes. The trigger system of the ATLAS experiment at the Large Hadron Collider (LHC) performs the on-line physics selection in three stages: level-1 (LVL1), level-2 (LVL2), and the event filter (EF). LVL2 and EF together form the HLT. The HLT receives events containing detector data from high-energy proton (or heavy ion) collisions, which pass the LVL1 selection at a maximum rate of 75 kHz. It must reduce this rate to ~200 Hz, while retaining the most interesting physics. The HLT is a software trigger and runs on a large computing farm. At the heart of the HLT is the Steering software. The HLT Steering must reach a decision whether or not to accept ...

  10. A read-out buffer prototype for ATLAS high level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2000-01-01

    Read-Out Buffers are critical components in the dataflow chain of the ATLAS Trigger/DAQ system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several Read-Out Buffers are grouped to form a Read-Out Buffer Complex that acts as a data server for the High Level Triggers selection algorithms and for the final data collection system. This paper describes a functional prototype of a Read-Out Buffer based on a custom made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data and to distribute data chunks at the desired request rate. We describe the hardware of the card that is based on an Intel I960 processor and CPLDs. We present the integration of several of these cards in a Read-Out Buffer Complex. We measure various performance figures and we discuss to which extent these can fulfill ATLAS needs. 5 Refs.

  11. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2018-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multi-threaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. The new multi-threaded ATLAS software framework, AthenaMT, has been designed from the ground up to support both the offline and online use-cases with the aim to further harmonize the offline and trigger algorithms. The latter is crucial both in terms of maintenance effort and to guarantee the high trigger efficiency and rejection factors needed for the next two decades of data-taking. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while...

  12. Upgrade of the ATLAS Level‐1 trigger with an FPGA based Topological Processor

    CERN Document Server

    Caputo, R; The ATLAS collaboration; Buescher, V; Degele, R; Kiese, P; Maldaner, S; Reiss, A; Schaefer, U; Simioni, E; Tapprogge, S; Urejola, P

    2013-01-01

    The ATLAS experiment is located at the European Centre for Nuclear Research (CERN) in Switzerland. It is designed to measure decay properties of highly energetic particles produced in the proton collisions at the Large Hadron Collider (LHC). LHC proton collisions at a frequency of 40 MHz require a trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. Event triggering is therefore one of the extraordinary challenges faced by the ATLAS detector. The Level-1 Trigger is the first rate-reducing step in the ATLAS Trigger, with an output rate of 75 kHz and a decision latency of less than 2.5 μs. It is primarily composed of the Calorimeter Trigger, the Muon Trigger, and the Central Trigger Processor (CTP). Due to the increase in the LHC instantaneous luminosity up to 3×10^34 cm^-2 s^-1 in 2015, a new element will be included in the Level-1 Trigger scheme: the Topological Processor (L1Topo). L1Topo receives data in a dedicated format from the calorimeters ...

  13. Yucca Mountain Project Site Atlas: Volume 1: Draft

    International Nuclear Information System (INIS)

    1988-10-01

    The Nevada Nuclear Waste Storage Investigations (NNWSI) Project Site Atlas is a reference document of field activities which have been, or are being, conducted by the US Department of Energy (DOE) to support investigations of Yucca Mountain as a potential site for an underground repository for high-level radioactive waste. These investigations, as well as future investigations, will yield geologic, geophysical, geochemical, geomechanical, hydrologic, volcanic, seismic, and environmental data necessary to characterize Yucca Mountain and its regional setting. This chapter summarizes the background of the NNWSI Project and the objective, scope, structure, and preparation of the Site Atlas. Chapter 2 describes in more detail the bibliography and map portfolio portions of the Atlas, which are presented in Chapter 4 and Volume 2, respectively. Chapter 3 describes how to use the Atlas. The objective of the Site Atlas is to create a management tool for the DOE Waste Management Project Office (WMPO) that will allow the WMPO to compile and disseminate information regarding the location of NNWSI Project field investigations, and document the permits acquired and the environmental, archaeological, and socioeconomic surveys conducted to support those investigations. The information contained in the Atlas will serve as a historical reference of site investigation field activities. A companion document to the Atlas is the NNWSI Project Surface Based Investigations Plan (SBIP)

  14. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    International Nuclear Information System (INIS)

    Di Mattia, A

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.

  15. The ATLAS trigger: high-level trigger commissioning and operation during early data taking

    International Nuclear Information System (INIS)

    Goncalo, R

    2008-01-01

    The ATLAS experiment is one of the two general-purpose experiments due to start operation soon at the Large Hadron Collider (LHC). The LHC will collide protons at a centre of mass energy of 14 TeV, with a bunch-crossing rate of 40 MHz. The ATLAS three-level trigger will reduce this input rate to match the foreseen offline storage capability of 100-200 Hz. This paper gives an overview of the ATLAS High Level Trigger focusing on the system design and its innovative features. We then present the ATLAS trigger strategy for the initial phase of LHC exploitation. Finally, we report on the valuable experience acquired through in-situ commissioning of the system where simulated events were used to exercise the trigger chain. In particular we show critical quantities such as event processing times, measured in a large-scale HLT farm using a complex trigger menu

  16. Sim@P1: Using Cloudscheduler for offline processing on the ATLAS HLT farm

    CERN Document Server

    Berghaus, Frank; The ATLAS collaboration

    2018-01-01

    The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides more than 2,000 compute nodes, which are critical to ATLAS during data taking. When ATLAS is not recording data, this large compute resource is used to generate and process simulation data for the experiment. The Sim@P1 system uses virtual machines, deployed by OpenStack, in order to isolate the resources from the ATLAS technical and control network. During the upcoming long shutdown in 2019 (LS2), the HLT farm including the Sim@P1 infrastructure will be upgraded. A previous paper on the project emphasized the need for “simple, reliable, and efficient tools” to quickly switch between data acquisition operation and offline processing. In this contribution we assess various options for updating and simplifying the provisional tools. Cloudscheduler is a tool for provisioning cloud resources for batch computing that has been managing cloud ...

  17. Frameworks to monitor and predict resource usage in the ATLAS High Level Trigger

    CERN Document Server

    Martin, Tim; The ATLAS collaboration

    2016-01-01

    The ATLAS High Level Trigger farm consists of around 30,000 CPU cores which filter events at up to 100 kHz input rate. A costing framework is built into the high-level trigger; this enables detailed monitoring of the system and allows for data-driven predictions to be made utilising specialist datasets. This talk will present an overview of how ATLAS collects in-situ monitoring data on both CPU usage and dataflow over the data-acquisition network during trigger execution, and how these data are processed to yield both low-level monitoring of individual selection algorithms and high-level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special 'Enhanced Bias' event selection. This mechanism will be explained, along with how it is used to profile expected resource usage and output event rates of new physics selections before they are executed on the actual high-level trigger farm.
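
    The sketch below illustrates, with invented algorithm names and timings, the kind of aggregation a trigger costing framework performs: per-event, per-algorithm CPU times are averaged and scaled by an assumed input rate to estimate the total farm load.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Sketch of cost aggregation: per-event, per-algorithm CPU times are summed
// and, together with an assumed input rate, turned into an estimate of total
// farm load. Algorithm names, times and the input rate are illustrative only.
struct AlgCall { std::string name; double cpuMs; };

int main() {
    std::vector<std::vector<AlgCall>> events = {
        {{"FastCalo", 1.2}, {"FastTracking", 3.4}},
        {{"FastCalo", 1.1}},
        {{"FastCalo", 1.3}, {"FastTracking", 3.0}, {"PrecisionTracking", 40.0}},
    };
    std::map<std::string, double> totalMs;
    for (const auto& ev : events)
        for (const auto& call : ev) totalMs[call.name] += call.cpuMs;

    double inputRateHz = 100000.0;                 // assumed Level-1 output rate
    double meanMsPerEvent = 0.0;
    for (const auto& [name, ms] : totalMs) {
        meanMsPerEvent += ms / events.size();
        std::cout << name << ": " << ms / events.size() << " ms/event\n";
    }
    std::cout << "estimated farm load: "
              << meanMsPerEvent * inputRateHz / 1000.0 << " CPU-seconds/second\n";
}
```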

  18. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  19. Commissioning of the ATLAS High Level Trigger with single beam and cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Di Mattia, A, E-mail: dimattia@mail.cern.c [Michigan State University - Department of Physics and Astronomy 3218 Biomedical Physical Science - East Lansing, MI 48824-2320 (United States)

    2010-04-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system is responsible for making the online selection of interesting collision events. At the LHC design luminosity of 10^34 cm^-2 s^-1 it will need to achieve a rejection factor of the order of 10^-7 against random proton-proton interactions, while selecting with high efficiency events that are needed for physics analyses. After a first processing level using custom electronics based on FPGAs and ASICs, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a 'stress test' of the system and some initial calibration data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. After giving an overview of the trigger design and its innovative features, this paper focuses on the experience gained from operating the ATLAS trigger with single LHC beams and cosmic-rays.

  20. The Phase-1 Upgrade for the Level-1 Muon Barrel Trigger of the ATLAS Experiment at LHC

    CERN Document Server

    Izzo, Vincenzo; The ATLAS collaboration

    2018-01-01

    The Level-1 Barrel Trigger of the ATLAS Experiment is based on Resistive Plate Chambers (RPC) detectors. The on-detector trigger electronics identifies muons with specific values of transverse momentum (pT), by using coincidences between different layers of detectors. Trigger data is then transferred from on-detector to the off-detector trigger electronics boards. Data is processed by a complex system, which combines trigger data from the Barrel and the End-cap regions, and provides the combined muon candidate to the Central Trigger Processor (CTP). The system has been performing very well for almost a decade. However, in order to cope with continuously increasing LHC luminosity and more demanding requirements on trigger efficiency and performance, various upgrades for the full trigger system were already deployed, and others are foreseen in the next years. Most of the trigger upgrades are based on state-of-the-art technologies and allow designing more complex trigger menus, increasing processing power and da...

  1. Discrete event simulation of the ATLAS second level trigger

    International Nuclear Information System (INIS)

    Vermeulen, J.C.; Dankers, R.J.; Hunt, S.; Harris, F.; Hortnagl, C.; Erasov, A.; Bogaerts, A.

    1998-01-01

    Discrete event simulation is applied for determining the computing and networking resources needed for the ATLAS second level trigger. This paper discusses the techniques used and some of the results obtained so far for well defined laboratory configurations and for the full system
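
    As an illustration of the technique (not of the actual ATLAS second-level trigger model), the sketch below runs a minimal discrete event simulation in which request arrivals and processing completions are kept in a time-ordered queue and handled in time order, with invented arrival spacing and service time.

```cpp
#include <algorithm>
#include <iostream>
#include <queue>
#include <vector>

// Minimal discrete-event simulation loop of the kind used to model a
// trigger/DAQ system: events ("request arrives", "processing done") are kept
// in a time-ordered queue and processed in order. Arrival pattern and service
// time are invented and only illustrate the technique.
struct Event { double time; int type; };  // type 0 = arrival, 1 = completion
struct Later {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, Later> queue;
    for (int i = 0; i < 5; ++i) queue.push({i * 0.4, 0});  // arrivals every 0.4 ms

    double busyUntil = 0.0, serviceMs = 1.0;
    int processed = 0;
    while (!queue.empty()) {
        Event ev = queue.top(); queue.pop();
        if (ev.type == 0) {                       // arrival: schedule its completion
            double start = std::max(ev.time, busyUntil);
            busyUntil = start + serviceMs;
            queue.push({busyUntil, 1});
        } else {
            ++processed;                          // completion event
        }
    }
    std::cout << processed << " requests processed, last finished at "
              << busyUntil << " ms\n";
}
```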

  2. Performance of the ATLAS Calorimeter Trigger in the LHC Run 1 Data Taking Period

    CERN Document Server

    Oliveira Damazio, D; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated very successfully during the LHC Run 1 data taking period, collecting a large number of events used for the discovery of the Higgs boson as well as for the search for physics beyond the Standard Model. In the main search channels related to the finding of the Higgs, the ATLAS calorimeter system played a major role measuring the energy of photons, electrons, jets, taus and neutrinos, via the missing transverse energy measurement. The ATLAS trigger system selects, from the huge number of events produced every second, those few that must be recorded for physics analysis (fewer than one out of 40 thousand can be kept). The selection process is performed in three levels of increasing complexity and resolution. The first level is hardware based, seeding the two other, software-based levels, together called the High-Level Trigger. The paper will describe details of the calorimeter-based HLT algorithms with special emphasis on the algorithms used for missing transverse energy and jet detection, which were impro...

  3. Instrumentation of a Level-1 Track Trigger in the ATLAS detector for the High Luminosity LHC

    CERN Document Server

    Boisvert, V; The ATLAS collaboration

    2012-01-01

    One of the main challenges in particle physics experiments at hadron colliders is to build detector systems that can take advantage of the future luminosity increase that will take place during the next decade. More than 200 simultaneous collisions will be recorded in a single event, which will make the task of extracting the interesting physics signatures harder than ever before. Not all events can be recorded, hence a fast trigger system is required to select events that will be stored for further analysis. In the ATLAS experiment at the Large Hadron Collider (LHC) two different architectures for accommodating a level-1 track trigger are being investigated. The tracker has more readout channels than can be read out in time for the trigger decision. Both architectures aim for a data reduction factor of 10-100 in order to make readout of data possible in time for a level-1 trigger decision. In the first architecture the data reduction is achieved by reading out only parts of the detector seeded by a high-rate pre-trigger ...

  4. Test Specification of A1-1 Test for OECD-ATLAS Project

    International Nuclear Information System (INIS)

    Kang, Kyoung-Ho; Moon, Sang-Ki; Lee, Seung-Wook; Choi, Ki-Yong; Song, Chul-Hwa

    2014-01-01

    In the OECD-ATLAS project, design extension conditions (DECs) such as a station blackout (SBO) and a total loss of feed water (TLOFW) will be experimentally investigated to address the international interest in multiple high-risk DECs raised after the Fukushima accident. The proposed test matrix for the OECD-ATLAS project is summarized in Table 1. In this study, the detailed specification of the first test of the OECD-ATLAS project, named A1-1, is described. The target scenario of the A1-1 test is a prolonged SBO with delayed supply of turbine-driven auxiliary feedwater to steam generator 2 (SG-2) only, in order to consider an accident mitigation measure. An SBO is one of the most important DECs in that, without proper operator actions, a total loss of heat sink leads to core uncovery, core damage, and ultimately a core melt-down under high pressure. Owing to this safety importance, an SBO is considered a base test item of the OECD-ATLAS project. A pre-test analysis using the MARS code was performed with the aim of setting up the detailed test procedures for the A1-1 test and of gaining physical insight into a prolonged SBO transient. In the A1-1 test, a prolonged SBO transient will be simulated in two temporal phases: Phase (I), a conservative SBO transient without supply of turbine-driven auxiliary feedwater, and Phase (II), asymmetric cooling via a single-train supply of turbine-driven auxiliary feedwater.

  5. Full supersymmetry simulation for ATLAS in DC1

    International Nuclear Information System (INIS)

    Biglietti, Michela; Brochu, Frederic; Costanzo, Davide; De, Kaushik; Duchovni, Ehud; Gupta, Ambreesh; Hinchliffe, Ian; Lester, Chris; Lipniacka, Anna; Loch, Peter; Lytken, Else; Ma, Hong; Nielsen, Jakob L.; Paige, Frank; Polesello, Giacomo; Rajagopalan, Srini; Schrager, Dan; Stavropoulos, Georgios; Tovey, Dan; Wielers, Monika

    2004-01-01

    This note reports results from a simulation of 100k events for one example of a minimal SUGRA supersymmetry case at the LHC using full simulation of the ATLAS detector. It was carried out as part of ATLAS Data Challenge 1.

  6. Experimental Results of A1.2 Test for OECD-ATLAS Project

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kyoung-Ho; Bae, Byoung-Uhn; Park, Yu-Sun; Kim, Jong-Rok; Choi, Nam-Hyun; Choi, Ki-Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In order to address the international interest in multiple high-risk design extension conditions (DECs) raised after the Fukushima accident, KAERI (Korea Atomic Energy Research Institute) is operating an OECD/NEA project (hereafter, the OECD-ATLAS project) utilizing a thermal-hydraulic integral-effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation). For the prolonged SBO transient of the OECD-ATLAS project, two tests, named A1.1 and A1.2, were selected. Passive safety systems are considered the most promising alternatives for reinforcing the safety and reliability of the ultimate heat-removal system without any operator action during SBO transients. As one of the new safety-improvement concepts to mitigate an SBO accident efficiently, the cooling and operational performance of the passive auxiliary feedwater system (PAFS) is investigated in the framework of the OECD-ATLAS project to produce clearer knowledge of the actual phenomena and to provide the best guidelines for accident management. As the second test of the OECD-ATLAS project, the A1.2 test was conducted to simulate a prolonged SBO with asymmetric secondary cooling through the supply of passive auxiliary feedwater only to SG-2. When the collapsed water level of the steam generator reached 25% of the wide range, the PAFS was actuated. The PAFS played a key role in cooling down the primary system through heat transfer and natural circulation. With the actuation of the PAFS, the fluid temperatures at the core inlet and outlet started to decrease without any excursion of the maximum heater surface temperature in the core. The integral-effect test data of the A1.2 test can be used to evaluate the prediction capability of existing safety analysis codes and to identify any code deficiency in an SBO simulation with the operation of a passive system such as the PAFS.

  7. The Level-1 Calorimeter Global Feature Extractor (gFEX) Boosted Object Trigger for the Phase-I Upgrade of the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00235957; The ATLAS collaboration; Stark, Giordon; Miller, David

    2016-01-01

    The Global Feature Extractor (gFEX) module is a component of the Level-1 online trigger system of the ATLAS experiment, planned for installation during the Phase-I upgrade in 2018. This unique single electronics board with multiple high-speed processors will receive coarse-granularity information from all the ATLAS calorimeters, enabling the identification in real time of large-radius jets for capturing Lorentz-boosted objects such as top quarks and Higgs, $Z$ and $W$ bosons. The gFEX architecture also facilitates the calculation of global event variables such as missing transverse energy, centrality for heavy-ion collisions, and event-by-event pile-up energy density. Details of the electronics architecture that provides these capabilities are presented, along with results of tests of the prototype systems now available. The status of the firmware algorithm design and implementation, as well as the monitoring capabilities, is also presented.
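
    One of the global event variables named above, the event-by-event pile-up energy density, can be illustrated as a median over equal-area calorimeter patches, as in the hedged sketch below; the patch area and the toy inputs are assumptions, and the real gFEX firmware is not being reproduced here.

      # Event-by-event pile-up energy density as the median of patch ET per unit
      # area.  Patch area and inputs are illustrative assumptions.
      import statistics

      def pileup_density(patches, patch_area):
          """patches: summed ET values (GeV) in equal-area calorimeter patches."""
          return statistics.median(et / patch_area for et in patches)

      toy_patches = [12.0, 9.5, 11.0, 55.0, 10.2, 8.8]    # one patch holds a hard jet
      rho = pileup_density(toy_patches, patch_area=0.7)   # area in eta-phi units
      print(f"rho = {rho:.1f} GeV per unit area")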

  8. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    CERN Document Server

    AUTHOR|(SzGeCERN)377840; Fressard-Batraneanu, Silvia Maria; Ballestrero, Sergio; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2015-01-01

    During the LHC Long Shutdown 1 period (LS1), which started in 2013, the Simulation at Point1 (Sim@P1) Project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 virtual machines (VMs) with 8 CPU cores each, for a total of up to 22000 parallel running jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 Project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 50 million CPU-hours and it generated more than 1.7 billion Monte Carlo events for various analysis communities. The design aspects a...

  9. An FPGA based demonstrator for a topological processor in the future ATLAS L1-Calo trigger “GOLD”

    CERN Document Server

    Ebling, A; Büscher, V; Degele, R; Ji, W; Meyer, C; Moritz, S; Schäfer, U; Simioni, E; Tapprogge, S; Wenzel, V

    2012-01-01

    The existing ATLAS trigger consists of three levels. Level 1 (L1) is an FPGA-based, custom-designed trigger, while the second and third levels are software based. The LHC machine plans to bring the beam energy to the maximum value of 7 TeV and to increase the luminosity in the coming years. The current L1 trigger system is therefore seriously challenged. To cope with the resulting higher event rate, as part of the ATLAS trigger upgrade, a new electronics module is foreseen to be added to the ATLAS Level-1 Calorimeter Trigger electronics chain: the Topological Processor (TP). Such a processor needs fast optical I/O and large aggregate bandwidth in order to use the information on trigger-object position in space (e.g. jets in the calorimeters or muons measured in the muon detectors) to improve the purity of the L1 trigger streams by applying topological cuts within the L1 latency budget. In this paper, an overview of the adopted technological solutions and the R&D activities on the demonstrator for th...
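
    To make the notion of a topological cut concrete, the sketch below applies a simple eta-phi separation (DeltaR) requirement between a muon candidate and a jet candidate; the object format, the 0.4 threshold and the accept logic are illustrative assumptions, not the TP firmware.

      # Toy topological selection: accept the event if any muon-jet pair is well
      # separated in DeltaR.  Threshold and inputs are assumptions.
      import math

      def delta_r(a, b):
          deta = a["eta"] - b["eta"]
          dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
          return math.hypot(deta, dphi)

      def pass_mu_jet_separation(muons, jets, min_dr=0.4):
          return any(delta_r(m, j) > min_dr for m in muons for j in jets)

      muons = [{"eta": 0.5, "phi": 1.0}]
      jets  = [{"eta": 0.6, "phi": 1.1}, {"eta": -1.2, "phi": -2.0}]
      print(pass_mu_jet_separation(muons, jets))   # True: the second jet is far from the muon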

  10. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.

    2015-12-01

    During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and it generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.

  11. Resource utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    CERN Document Server

    Ospanov, R

    2012-01-01

    In 2010 and 2011, the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at Level-1 and software algorithms at the two higher levels. The trigger selection is defined by a trigger menu which consists of more than 300 individual trigger signatures, such as electrons, muons, particle jets, etc. The execution of a trigger signature incurs computing and data-storage costs. The composition of the deployed trigger menu depends on the instantaneous LHC luminosity, the experiment's goals for the recorded data, and the limits imposed by the available computing power, network bandwidth and storage space. This paper describes a trigger monitoring framework for assigning computing costs to individual trigger signatures and to trigger menus as a whole. These costs can be extrapolat...

  12. Frameworks to monitor and predict rates and resource usage in the ATLAS High Level Trigger

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219969; The ATLAS collaboration

    2017-01-01

    The ATLAS High Level Trigger farm consists of around 40,000 CPU cores which filter events at an input rate of up to 100 kHz. A costing framework is built into the high-level trigger, enabling detailed monitoring of the system and allowing data-driven predictions to be made using specialist datasets. An overview is presented of how ATLAS collects in-situ monitoring data on CPU usage during trigger execution, and how these data are processed to yield both low-level monitoring of individual selection algorithms and high-level data on the overall performance of the farm. For development and prediction purposes, ATLAS uses a special ‘Enhanced Bias’ event selection. This mechanism is explained, along with how it is used to profile the expected resource usage and output event rate of new physics selections before they are executed on the actual high-level trigger farm.
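
    The rate-prediction idea can be sketched as a weighted counting exercise: each enhanced-bias event carries a weight that undoes the online prescales, and the predicted rate of a new selection is the weighted number of passing events divided by the sampled live time. The event structure, the weights and the 60 s normalisation below are assumptions for illustration, not the ATLAS costing framework itself.

      # Rate prediction from a weighted ('enhanced bias'-style) event sample.
      def predict_rate(events, selection, livetime_seconds=60.0):
          """events: iterable of dicts with a 'weight' key; selection: predicate."""
          passed = sum(ev["weight"] for ev in events if selection(ev))
          return passed / livetime_seconds            # Hz

      toy_events = [
          {"weight": 120.0, "jet_et": 35.0},
          {"weight":  15.0, "jet_et": 80.0},
          {"weight":   1.5, "jet_et": 410.0},
      ]
      rate = predict_rate(toy_events, lambda ev: ev["jet_et"] > 100.0)
      print(f"predicted rate: {rate:.3f} Hz")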

  13. Performance of the ATLAS first-level Trigger with first LHC Data

    CERN Document Server

    Lundberg, J; The ATLAS collaboration

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Its trigger system must reduce the anticipated proton collision rate of up to 40 MHz to a recordable event rate of 100-200 Hz. This is realized through a multi-level trigger system. The first-level trigger is implemented with custom-built electronics and makes an initial selection which reduces the rate to less than 100 kHz. The subsequent trigger selection is done in software running on PC farms. The first-level trigger decision is made by the central trigger processor using coarse-grained calorimeter information, dedicated muon-trigger detectors, and a variety of additional trigger inputs from detectors in the forward regions. We present the performance of the first-level trigger during the commissioning of the ATLAS detector and early LHC running. We cover the trigger strategies used during the different machine commissioning phases, from first circulating beams and splash events to collisions. It is descri...

  14. Studies for a common selection software environment in ATLAS from the Level-2 Trigger to the offline reconstruction

    CERN Document Server

    Wiedenmann, W; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A T; Wengler, T; Werner, P; Wheeler, S; Wickens, F J; Wielers, M; Zobernig, G; NSS-MIC 2003 - IEEE Nuclear Science Symposium and Medical Imaging Conference, Part 1

    2004-01-01

    The ATLAS High Level Trigger's primary function of event selection will be accomplished with a Level-2 trigger farm and an Event Filter farm, both running software components developed in the ATLAS offline reconstruction framework. While this approach provides a unified software framework for event selection, it poses strict requirements on offline components critical for the Level-2 trigger. A Level-2 decision in ATLAS must typically be accomplished within 10 ms and with multiple events processed in concurrent threads. In order to address these constraints, prototypes have been developed that incorporate elements of the ATLAS Data Flow, High Level Trigger, and offline framework software. To realize a homogeneous software environment for offline components in the High Level Trigger, the Level-2 Steering Controller was developed. With electron/gamma and muon selection slices it has been shown that the required performance can be reached if the offline components used are carefully designed and optimized ...

  15. The ATLAS High Level Trigger Configuration and Steering, Experience with the First 7 TeV Collisions

    CERN Document Server

    Stelzer, J; The ATLAS collaboration

    2011-01-01

    In March 2010 the four LHC experiments saw the first proton-proton collisions at a center-of-mass energy of 7 TeV. Within the year, a collision rate of nearly 10 MHz was expected. At ATLAS, events of potential physics interest are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in customized hardware; the two levels of the high-level trigger (HLT) are software triggers. For the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event, and the test outcome is recorded for later analysis. The HLT Steering is responsible for this. It foremost ensures the independence of each signature test and unbiased trigger decisions. Yet, to minimize data readout and execution time, cached detector data and once-calculated trigger objects are reused to form the decision. Some signature tests are performed only on a scaled-down fraction of candidate events, in order to reduce the...
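
    The "scaled-down fraction of candidate events" mentioned above is a trigger prescale: only one out of every N events that satisfy a signature is actually accepted by it. A counter-based toy version is sketched below; the class name, the counter implementation and N=10 are assumptions for illustration.

      # Toy prescaled trigger signature: accept every N-th passing event.
      class PrescaledSignature:
          def __init__(self, name, prescale):
              self.name, self.prescale, self._seen = name, prescale, 0

          def accept(self, passes_selection):
              """Return True only for every prescale-th event that passes."""
              if not passes_selection:
                  return False
              self._seen += 1
              return self._seen % self.prescale == 0

      sig = PrescaledSignature("toy_low_et_electron", prescale=10)
      accepted = sum(sig.accept(True) for _ in range(1000))
      print(accepted)   # 100: a factor-10 rate reduction for this signature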

  16. ATLAS Data Challenge 1

    CERN Document Server

    Poulard, G

    2003-01-01

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are the validation of the Computing Model, the complete software suite and the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a worldwide distributed activity. The first phase of DC1 was run during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71000 CPU-days were used, producing 30 Tbytes of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~4000 processors used in parallel). The basic elements of ...

  17. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone significant developments and, as a result, has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into the exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has significantly lowered the entry barrier for data-centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from the file level to the event level, and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  18. Performance studies of the ATLAS transition radiation tracker barrel using SR1 cosmics data

    CERN Document Server

    Wall, R

    The ATLAS experiment at the Large Hadron Collider (LHC) is designed to measure Nature at the energy scale often associated with electroweak symmetry breaking. When it comes online in 2008, the LHC and ATLAS will work to discover, among other things, the Higgs boson and any other signatures for physics beyond the Standard Model. As part of the ATLAS Inner Detector, the Transition Radiation Tracker will be an important part of ATLAS’s ability to make precise measurements of particle properties. This paper summarizes work done to study and categorize the performance of the TRT, using a combination of cosmic-ray test data from the SR1 facility and Monte Carlo. In general, it was found that the TRT is working well, with module-level efficiencies around 90% and module-level noise just above 2%. Reasonably good agreement was observed with Monte Carlo, though there are some apparently pathological differences between the two that deserve further attention.

  19. Constituent-level pile-up mitigation techniques in ATLAS

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    Pile-up from simultaneous proton-proton collisions at the LHC has a significant impact on jet reconstruction. In this note the performance of several pile-up mitigation techniques is evaluated in detailed simulations of the ATLAS experiment. Four algorithms that act at the jet-constituent level are evaluated: SoftKiller, the cluster vertex fraction algorithm, Voronoi subtraction and constituent subtraction. We find that the application of these constituent-level algorithms improves the resolution of low-transverse-momentum jets. The improvement is significant for events with the 80-200 simultaneous proton-proton collisions envisaged in future runs of the LHC.
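
    A SoftKiller-like cut can be sketched in a few lines: divide the event into patches and set the constituent pT threshold to the median of the per-patch maximum pT, so that roughly half of the patches end up empty. The patch assignment and the toy constituents below are assumptions; this is a simplified illustration, not the ATLAS implementation.

      # Simplified SoftKiller-like constituent-level pile-up cut.
      import statistics
      from collections import defaultdict

      def softkiller_like(constituents, patch_size=0.6):
          """constituents: list of (pt, eta, phi); returns the surviving list."""
          patches = defaultdict(list)
          for pt, eta, phi in constituents:
              key = (int(eta // patch_size), int(phi // patch_size))
              patches[key].append(pt)
          pt_cut = statistics.median(max(pts) for pts in patches.values())
          return [(pt, eta, phi) for pt, eta, phi in constituents if pt > pt_cut]

      toy = [(0.4, 0.1, 0.2), (0.6, 1.0, 1.5), (25.0, 1.1, 1.4), (0.3, -1.3, 2.8)]
      print(softkiller_like(toy))   # constituents at or below the dynamic pT cut are removed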

  20. Development of the jet Feature EXtractor (jFEX) for the ATLAS Level 1 calorimeter trigger upgrade at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00547698; The ATLAS collaboration; Brogna, Andrea Salvatore; Buescher, Volker; Degele, Reinhold; Herr, Holger; Kahra, Christian; Rave, Stefan; Rocco, Elena; Schaefer, Uli; Vieira De Souza, Julio; Tapprogge, Stefan; Bauss, Bruno

    2017-01-01

    To cope with the enhanced luminosity delivered by the Large Hadron Collider from 2021 onwards, the ATLAS experiment has planned several upgrades. The first level trigger based on calorimeter data will be upgraded to exploit fine-granularity readout using a new system of Feature EXtractors (FEXs, FPGA-based trigger boards), each optimized to trigger on different physics objects. This contribution is focused on the jet FEX. The main challenges of such a board are the input bandwidth of up to 3.1 Tbps, dense routing of high-speed signals and power consumption. The design, PCB simulations and results of integrated tests of a prototype are shown in this document.

  1. B-identification in the Level-2 trigger of the ATLAS experiment

    CERN Document Server

    AUTHOR|(CDS)2072780

    At present, the new proton-proton storage ring LHC and its four associated experiments are being built at CERN, the European research centre for particle physics. The goals of the experiments include, among others, the discovery of the Higgs boson and detailed studies of the top quark. In order to obtain data samples that are as pure as possible, it would be helpful to select these events as efficiently as possible already during data taking. This would be aided by the ability to identify b-quark jets at trigger level. The goal of this thesis was the development of an algorithm for the identification of b-quark jets which fulfils the requirements of the Level-2 trigger. The first chapter of the thesis gives an overview of the essential components of the Standard Model of particle physics. The following two chapters describe the accelerator, the ATLAS detector and the ATLAS trigger system. Chapter four describes the possibilities for b-jet identification as well as a vertex algorithm based on the perigee pa...

  2. Data analysis at Level-1 Trigger level

    CERN Document Server

    Wittmann, Johannes; Aradi, Gregor; Bergauer, Herbert; Jeitler, Manfred; Wulz, Claudia; Apanasevich, Leonard; Winer, Brian; Puigh, Darren Michael

    2017-01-01

    With ever-increasing luminosity at the LHC, optimal online data selection is becoming more and more important. While for some experiments (LHCb and ALICE) this task is being transferred completely to computer farms, the others - ATLAS and CMS - will not be able to do this in the medium-term future for technological, detector-related reasons. These experiments therefore pursue the complementary approach of migrating more and more of the offline and High-Level Trigger intelligence into the trigger electronics. This paper illustrates how the Level-1 Trigger of the CMS experiment, and in particular its concluding stage, the Global Trigger, take up this challenge.

  3. LS1 Report: Handing in the ATLAS keys

    CERN Multimedia

    Antonella Del Rosso, Katarina Anthony

    2014-01-01

    After completing more than 250 work packages concerning the whole detector and experimental site, the ATLAS and CERN teams involved with LS1 operations are now wrapping things up before starting the commissioning phase in preparation for the LHC restart. The giant detector is now more efficient, safer and even greener than ever thanks to the huge amount of work carried out over the past two years.   Cleaning up the ATLAS cavern and detector in preparation for Run 2. Hundreds of people, more than 3000 certified interventions, huge and delicate parts of the detector completely refurbished: the ATLAS detector that will take data during Run 2 is a brand new machine, which will soon be back in the hands of the thousands of scientists who are preparing for the high-energy run of the LHC accelerator. “During LS1, we have upgraded the detector’s basic infrastructure and a few of its sub-detectors,” explains Beniamino Di Girolamo, ATLAS Technical Coordinator. &...

  4. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  5. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) a tremendous amount of work was done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced, to ensure a stable operational environment. LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks not only of the System Administrators, but also of the scientists who wil...

  6. A mixed signal multi-chip module with high speed serial output links for the ATLAS Level-1 trigger

    CERN Document Server

    Pfeiffer, U

    2000-01-01

    We have built and tested a mixed-signal multi-chip module (MCM) to be used in the Level-1 Pre-Processor system for the Calorimeter Trigger of the ATLAS experiment at CERN. The MCM performs high-speed digital signal processing on four analogue input signals. Results are transmitted serially at a data rate of 800 MBd. Nine chips of different technologies are mounted on a four-layer Cu substrate. The ADCs and serialiser chips are the major consumers of electrical power on the MCM, which amounts to 9 W for all dies. Special cut-out areas are used to dissipate heat directly to the copper substrate. In this paper we report on the design criteria, the chosen MCM technology for substrate and die mounting, experiences with MCM operation, and measurement results.

  7. Spanish ATLAS Tier-1 & Tier-2 perspective on computing over the next years

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration

    2018-01-01

    Since the beginning of the WLCG Project the Spanish ATLAS computing centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPU) over the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction, as well as the number of authors, are both close to 3%. In 2015 an international advisory committee recommended revising our contribution according to the participation in the ATLAS experiment. With this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to flexibly use all the resources available to the collaboration, where the tiered structure is gradually vanishing. In this contribution we show the evolution and technical updates in the ATLAS Spanish federated Tier-2 and Tier-1. Some developments w...

  8. The Chang’E-1 topographic atlas of the Moon

    CERN Document Server

    Li, Chunlai; Mu, Lingli; Ren, Xin; Zuo, Wei

    2016-01-01

    This atlas is based on the global lunar Digital Elevation Model (DEM) of Chang'E-1 (CE-1), produced from CCD stereo image data by digital photogrammetry. The spatial resolution of the DEM in this atlas is 500 m, with a horizontal accuracy of 192 m and a vertical accuracy of 120 m. Color-shaded relief maps with contour lines are used to show the lunar topographic characteristics. The topographic data gathered by CE-1 can provide fundamental information for the study of lunar topographic, morphological and geological structures, as well as for lunar evolution research.

  9. The Upgrade of the ATLAS First Level Calorimeter Trigger

    CERN Document Server

    Yamamoto, Shimpei; The ATLAS collaboration

    2015-01-01

    The Level-1 calorimeter trigger (L1Calo) operated successfully during the first data-taking phase of the ATLAS experiment at the LHC. Based on the lessons learned, a series of upgrades is planned for L1Calo to face the new challenges posed by the upcoming increases of the LHC beam energy and luminosity. The initial upgrade phase in 2013-15 includes substantial improvements to the analogue and digital signal processing to cope with baseline shifts due to signal pile-up. Additionally, a newly introduced system will receive real-time data from both the upgraded L1Calo and the L1Muon trigger to perform trigger algorithms based on entire event topologies. During the second upgrade phase in 2018-19, major parts of L1Calo will be rebuilt in order to exploit a tenfold increase in the available calorimeter data granularity compared to that of the current system. In this contribution we present the lessons learned during the first period of LHC data taking. Based on these, we discuss the expected performance improvements toge...

  10. The ATLAS Trigger algorithms upgrade and performance in Run 2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which result from the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken impr...

  11. Large-scale performance studies of the Resistive Plate Chamber fast tracker for the ATLAS 1st-level muon trigger

    CERN Document Server

    Cattani, G; The ATLAS collaboration

    2009-01-01

    In the ATLAS experiment, Resistive Plate Chambers provide the first-level muon trigger and bunch-crossing identification over a large area of the barrel region, as well as being used as a very fast 2D tracker. To achieve these goals a system of about ~4000 gas gaps operating in avalanche mode was built (resulting in a total readout surface of about 16000 m2 segmented into 350000 strips) and is now fully operational in the ATLAS pit, where its functionality has been widely tested using cosmic rays. Such a large-scale system allows the performance of RPCs to be studied (from the point of view of both the gas gaps and the readout electronics) with unprecedented sensitivity to rare effects, as well as providing the means to correlate, in a statistically significant way, characteristics measured at the production sites with performance during operation. Calibrating such a system means fine-tuning thousands of parameters (involving both front-end electronics and gap voltage), as well as constantly monitoring performance and environm...

  12. The future of event-level information repositories, indexing, and selection in ATLAS

    International Nuclear Information System (INIS)

    Barberis, D; Cranshaw, J; Malon, D; Gemmeren, P Van; Zhang, Q; Dimitrov, G; Nairz, A; Sorokoletov, R; Doherty, T; Quilty, D; Gallas, E J; Hrivnac, J; Nowak, M

    2014-01-01

    ATLAS maintains a rich corpus of event-by-event information that provides a global view of the billions of events the collaboration has measured or simulated, along with sufficient auxiliary information to navigate to and retrieve data for any event at any production processing stage. This unique resource has been employed for a range of purposes, from monitoring, statistics, anomaly detection, and integrity checking, to event picking, subset selection, and sample extraction. Recent years of data-taking provide a foundation for assessment of how this resource has and has not been used in practice, of the uses for which it should be optimized, of how it should be deployed and provisioned for scalability to future data volumes, and of the areas in which enhancements to functionality would be most valuable. This paper describes how ATLAS event-level information repositories and selection infrastructure are evolving in light of this experience, and in view of their expected roles both in wide-area event delivery services and in an evolving ATLAS analysis model in which the importance of efficient selective access to data can only grow.
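
    The event-picking use case described above can be pictured as a keyed lookup: an event-level index maps (run number, event number) to a pointer saying where that event can be retrieved. The record layout and lookup API below are assumptions for illustration, not the actual ATLAS EventIndex schema.

      # Toy event-level index for event picking.
      from typing import NamedTuple, Optional

      class EventPointer(NamedTuple):
          dataset: str      # logical dataset name
          file_guid: str    # file identifier within the dataset
          entry: int        # entry number inside that file

      index = {
          (358031, 1203455): EventPointer("data.toy.AOD", "GUID-AAAA", 4711),
          (358031, 1203502): EventPointer("data.toy.AOD", "GUID-AAAA", 4750),
      }

      def pick_event(run: int, event: int) -> Optional[EventPointer]:
          """Return the pointer needed to fetch one specific event, if indexed."""
          return index.get((run, event))

      print(pick_event(358031, 1203455))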

  13. The Resource utilization by ATLAS High Level Triggers. The contributed talk for the Technology and Instrumentation in Particle Physics 2011.

    CERN Document Server

    Ospanov, R; The ATLAS collaboration

    2011-01-01

    In 2010 the ATLAS experiment successfully recorded data from LHC collisions with high efficiency and excellent data quality. ATLAS employs a three-level trigger system to select events of interest for physics analyses and detector commissioning. The trigger system consists of a custom-designed hardware trigger at level-1 (L1) and software algorithms executing on commodity servers at the two higher levels: the second-level trigger (L2) and the event filter (EF). The corresponding trigger rates are 75 kHz, 3 kHz and 200 Hz. The L2 uses custom algorithms to examine a small fraction of the data at full detector granularity in Regions of Interest selected by the L1. The EF employs offline algorithms and full detector data for more computationally intensive analysis. The trigger selection is defined by trigger menus which consist of more than 500 individual trigger signatures, such as electrons, muons, particle jets, etc. The execution of a trigger signature incurs computing and data-storage costs. A composition of the depl...
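
    The quoted rates imply the following per-level rejection factors; the short calculation below just spells out that arithmetic.

      # Rejection factors implied by the quoted trigger rates.
      l1, l2, ef = 75_000.0, 3_000.0, 200.0
      print(f"L1 -> L2 rejection: {l1 / l2:.0f}x")     # 25x
      print(f"L2 -> EF rejection: {l2 / ef:.0f}x")     # 15x
      print(f"overall HLT rejection: {l1 / ef:.0f}x")  # 375x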

  14. ATLAS. LHC experiments

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    In Greek mythology, Atlas was a Titan who had to hold up the heavens with his hands as a punishment for having taken part in a revolt against the Olympians. For the LHC, the ATLAS detector will also have an onerous physics burden to bear, but this is seen as a golden opportunity rather than a punishment. The major physics goal of CERN's LHC proton-proton collider is the quest for the long-awaited 'Higgs' mechanism which drives the spontaneous symmetry breaking of the electroweak Standard Model picture. The large ATLAS collaboration proposes a large general-purpose detector to exploit the full discovery potential of LHC's proton collisions. LHC will provide proton-proton collision luminosities at the awe-inspiring level of 10$^{34}$ cm$^{-2}$s$^{-1}$, with initial running at 10$^{33}$. The ATLAS philosophy is to handle as many signatures as possible at all luminosity levels, with the initial running providing more complex possibilities. The ATLAS concept was first presented as a Letter of Intent to the LHC Committee in November 1992. Following initial presentations at the Evian meeting ('Towards the LHC Experimental Programme') in March of that year, two ideas for general-purpose detectors, the ASCOT and EAGLE schemes, merged, with Friedrich Dydak (MPI Munich) and Peter Jenni (CERN) as ATLAS co-spokesmen. Since the initial Letter of Intent presentation, the ATLAS design has been optimized and developed, guided by physics performance studies and the LHC-oriented detector R&D programme (April/May, page 3). The overall detector concept is characterized by an inner superconducting solenoid (for inner tracking) and large superconducting air-core toroids outside the calorimetry. This solution avoids constraining the calorimetry while providing a high-resolution, large-acceptance and robust detector. The outer magnet will extend over a length of 26 metres, with an outer diameter of almost 20 metres. The total weight of the detector is 7,000 tonnes. Fitted with its end

  15. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extens...

  16. ATLAS detector upgrade prospects

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00184940; The ATLAS collaboration

    2017-01-01

    After the successful operation at centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a centre-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb$^{-1}$ expected for LHC running to 3000 fb$^{-1}$ by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of ...

  17. Commissioning of the ATLAS high-level trigger with single beam and cosmic rays

    CERN Document Server

    Özcan, V Erkcan

    2010-01-01

    ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). Using fast reconstruction algorithms, its trigger system needs to efficiently reject a huge rate of background events and still select potentially interesting ones with good efficiency. After a first processing level using custom electronics, the trigger selection is made by software running on two processor farms, designed to have a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events. The recent LHC startup and short single-beam run provided a "stress test" of the trigger. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. These running periods allowed strict tests of the HLT reconstruction and selection algorithms as we...

  18. Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F; The ATLAS collaboration

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...
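
    The resolution correction described above amounts to a subtraction in quadrature: the observed spread of vertex positions is the true luminous-region width convolved with the vertex resolution, with the resolution estimated from the separation of split vertices. The sketch below shows that arithmetic on invented numbers; the factor used to convert the split-vertex separation into a per-vertex resolution is an assumption of this toy, not the ATLAS calibration.

      # Beam-spot width from vertex positions, corrected for vertex resolution
      # measured with split vertices (toy numbers, assumed scaling factor).
      import math, statistics

      def beam_spot_width(vertex_x, split_dx):
          """vertex_x: reconstructed vertex x positions (mm);
          split_dx: x separations of vertices refit from two halves of their tracks."""
          observed = statistics.pstdev(vertex_x)
          resolution = statistics.pstdev(split_dx) / 2.0   # assumed split-to-full scaling
          return math.sqrt(max(observed**2 - resolution**2, 0.0))

      vx = [0.012, -0.018, 0.025, -0.005, 0.031, -0.022, 0.008, -0.015]
      dx = [0.020, -0.015, 0.018, -0.022, 0.012, -0.019, 0.016, -0.010]
      print(f"corrected width: {beam_spot_width(vx, dx):.4f} mm")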

  19. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...
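
    The gain from intra-event parallelism can be pictured with a thread pool that runs independent trigger algorithms for the same event concurrently and then combines their decisions. The sketch below is a toy stand-in for that idea only; the algorithm list, timings and pool size are assumptions, and this is not the AthenaMT scheduler.

      # Toy intra-event parallelism: independent trigger algorithms for one event
      # run concurrently in a thread pool; the event is accepted if any chain fires.
      from concurrent.futures import ThreadPoolExecutor
      import time

      def make_algorithm(name, seconds, decision):
          def run(event):
              time.sleep(seconds)          # stand-in for reconstruction work
              return name, decision
          return run

      algorithms = [make_algorithm("toy_electron", 0.02, True),
                    make_algorithm("toy_muon", 0.03, False),
                    make_algorithm("toy_jet", 0.01, True)]

      def process_event(event, pool):
          results = dict(pool.map(lambda alg: alg(event), algorithms))
          return any(results.values()), results

      with ThreadPoolExecutor(max_workers=3) as pool:
          accepted, per_chain = process_event({"id": 1}, pool)
          print(accepted, per_chain)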

  20. The ATLAS High Level Trigger Infrastructure, Performance and Future Developments

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 multi-core processing nodes and will be extended incrementally, following the increasing luminosity of the LHC, to about 2000 nodes, depending on the evolution of processor technology. Due to the complexity and similarity of the algorithms, a large fraction of the software is shared between the online and offline event reconstruction. The HLT infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed and experiences from the operation of the ATLAS HLT during cosmic-ray data taking and first beam in 2008 will be presented. Since the event processing time at the HL...

  1. The Phase-I Upgrade of the ATLAS First Level Calorimeter Trigger

    CERN Document Server

    Andrei, George Victor; The ATLAS collaboration

    2017-01-01

    The ATLAS Level-1 calorimeter trigger is planning a series of upgrades in order to face the challenges posed by the upcoming increase of the LHC luminosity. The upgrade will benefit from new front-end electronics for parts of the calorimeter that provide the trigger system with digital data at a tenfold increase in granularity. This makes possible the implementation of more efficient algorithms than those currently used, in order to maintain low trigger thresholds under much harsher LHC collision conditions. The Level-1 calorimeter system upgrade consists of an active and a passive system for digital data distribution, and three different Feature Extractor systems which run complex algorithms to identify various physics-object candidates. The algorithms are implemented in firmware on custom electronics boards with up to four high-speed processing FPGAs. The main characteristics of the electronics boards are a high input bandwidth, up to several TB/s per module, implemented through optical receivers, and a large number of o...

  2. ATLAS Upgrade Plans

    CERN Document Server

    Hopkins, W; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades thereafter. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new...

  3. Upgrade of the PreProcessor System for the ATLAS LVL1 Calorimeter Trigger

    CERN Document Server

    Khomich, A; The ATLAS collaboration

    2010-01-01

    The ATLAS Level-1 Calorimeter Trigger is a hardware-based pipelined system designed to identify high-pT objects in the ATLAS calorimeters within a fixed latency of 2.5 µs. It consists of three subsystems: the PreProcessor, which conditions and digitizes analogue signals, and two digital processors. The majority of the PreProcessor's tasks are performed on a dense Multi-Chip Module (MCM) consisting of FADCs, time-adjustment and digital-processing ASICs, and LVDS serializers, designed and implemented in ten-year-old technologies. An MCM substitute, based on today's components (dual-channel FADCs and an FPGA), is being developed to profit from state-of-the-art electronics and to enhance the flexibility of the digital processing. The development and first test results are presented.

  4. Monitoring ATLAS L1 CTP data from P-BEAST

    CERN Document Server

    Roggel, Jens

    2017-01-01

    The ATLAS Level-1 Central Trigger Processor combines information from the calorimeters and the muon detectors and takes the decision to accept an event based on a list of selection criteria (trigger items). Busy signals from the detectors and dead time generated by the Central Trigger Processor prevent the buffers from becoming full. The visualisation of these data is useful for checking the functionality of the system. My project during the CERN summer student programme was to develop an application which produces plots of relevant Central Trigger Processor data and presents the results in an appropriate format for experts and users.

  5. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, general public, policy makers, students and teachers, and media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  6. Validation of ATLAS L1 Topological Triggers

    CERN Document Server

    Praderio, Marco

    2017-01-01

    The Topological trigger (L1Topo) is a new component of the ATLAS L1 (Level-1) trigger. Its purpose is to reduce the otherwise too-high rate of data collection from the LHC by rejecting events considered “uninteresting” (meaning that their physics has already been studied). This event-rate reduction is achieved by applying topological requirements to the physics objects present in each event. It is very important to make sure that this trigger does not reject any “interesting” event, so its correct functioning needs to be verified. The goal of this summer student project is to study the response of two L1Topo algorithms (concerning ∆R and invariant mass). To do so, I will compare the trigger decisions produced by the L1Topo hardware with those produced by the “official” L1Topo simulation. This way I will be able to identify events that could be incorrectly rejected. Simultaneously, I will produce an emulation of these triggers that will help me understand the cause of disagreements bet...
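
    The comparison described above can be emulated offline in a few lines: recompute the ∆R and invariant-mass decisions from the trigger objects and check them against the decision bits read from the hardware, flagging any mismatch. The thresholds, the massless-object approximation and the toy inputs below are assumptions of this sketch, not the actual L1Topo menu.

      # Offline emulation of two L1Topo-style algorithms (DeltaR, invariant mass)
      # compared against hardware decision bits; cuts and inputs are toy values.
      import math

      def delta_r(a, b):
          dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
          return math.hypot(a["eta"] - b["eta"], dphi)

      def inv_mass(a, b):
          # massless approximation: m^2 = 2 pT1 pT2 (cosh(deta) - cos(dphi))
          return math.sqrt(2 * a["pt"] * b["pt"] *
                           (math.cosh(a["eta"] - b["eta"]) - math.cos(a["phi"] - b["phi"])))

      def emulate(o1, o2, dr_cut=1.0, mass_cut=40.0):
          return {"DR": delta_r(o1, o2) < dr_cut, "INVM": inv_mass(o1, o2) > mass_cut}

      hardware_bits = {"DR": True, "INVM": False}    # as read from the hardware
      emulated = emulate({"pt": 30.0, "eta": 0.2, "phi": 0.1},
                         {"pt": 25.0, "eta": 0.9, "phi": 0.5})
      mismatches = [alg for alg in emulated if emulated[alg] != hardware_bits[alg]]
      print("emulated:", emulated, "mismatches:", mismatches)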

  7. RPCs as trigger detector for the ATLAS experiment performances, simulation and application to the level-1 di-muon trigger

    CERN Document Server

    Di Simone, A; Di Ciaccio, A

    2005-01-01

    In the muon spectrometer different detectors are used to provide trigger functionality and precision momentum measurements. In the pseudorapidity range $|\eta| < 1$ the first-level muon trigger is based on Resistive Plate Chambers, gas ionization detectors characterized by a fast response and an excellent time resolution (< 1.5 ns). The working principles of Resistive Plate Chambers are illustrated in chapter 3. Given the long period of operation expected for the ATLAS experiment (~10 years), ageing phenomena have been carefully studied in order to ensure stable long-term operation of all the subdetectors. Concerning Resistive Plate Chambers, a very extensive ageing test has been performed at CERN's Gamma Irradiation Facility on three production chambers. The results of this test are presented in chapter 4. One of the most commonly used gases in RPC operation is C2H2F4, which during the gas discharge can produce fluorine ions. Fluorine being one of the most aggressive elements in nature, the presenc...

  8. Performance of the ATLAS muon trigger in run 2

    CERN Document Server

    Morgenstern, Marcus; The ATLAS collaboration

    2017-01-01

    Triggering on muons is a crucial ingredient in fulfilling the physics program of the ATLAS experiment. The ATLAS trigger system deploys a two-stage strategy, a hardware-based Level-1 trigger and a software-based high-level trigger, to select events of interest at a suitable recording rate. Both stages underwent upgrades to cope with the challenges of Run-2 data-taking at a centre-of-mass energy of 13 TeV and instantaneous luminosities up to 2$\times$10$^{34}$ cm$^{-2}$s$^{-1}$. The design of the ATLAS muon triggers and their performance in proton-proton collisions at 13 TeV are presented.

  9. Experimental Results of A1.1 Test for OECD-ATLAS Project

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kyoung-Ho; Bae, Byoung-Uhn; Park, Yu-Sun; Kim, Jong-Rok; Choi, Nam-Hyun; Choi, Ki-Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    KAERI (Korea Atomic Energy Research Institute) is operating an OECD/NEA project (hereafter, the OECD-ATLAS project) utilizing a thermal-hydraulic integral-effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation). Considering the importance of the SBO scenario and the related accident mitigation measures, a prolonged SBO scenario was selected as the first test subject worthy of investigation in the OECD-ATLAS project, as summarized in Table 1. After the Fukushima accident, design extension conditions (DECs) such as an SBO and a total loss of feed water (TLOFW) attracted wide international attention, in that such high-risk multiple-failure accidents should be revisited from the viewpoint of reinforcing the 'defense in depth' concept. In particular, an SBO is one of the most important DECs because a total loss of heat sink can lead to a core melt-down scenario under high pressure without proper operator action. For the prolonged SBO transient of the OECD-ATLAS project, two tests, named A1.1 and A1.2, were selected. In most nuclear power plants (NPPs), a turbine-driven auxiliary feedwater system is designed to remove the decay heat during the early period of an SBO transient. From a conservative point of view, however, it is necessary to investigate the thermal-hydraulic behaviour of the NPP when a turbine-driven auxiliary feedwater supply is not available during the initial period of an SBO transient and a mobile pump-driven auxiliary feedwater supply only becomes available in the later period of the scenario. In particular, the asymmetric heat-removal characteristics resulting from the supply of auxiliary feedwater to only one steam generator are of particular importance in terms of safety analysis code validation. With the aim of addressing these safety considerations, in the A1.1 test a prolonged SBO transient was simulated in two temporal phases: Phase (I) for a conservative SBO transient

  10. The ATLAS Trigger Algorithms Upgrade and Performance in Run-2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the improvements undertaken resulted in more pile-up-robust selection efficiencies and event ra...

  11. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2018-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the improvements undertaken resulted in more pile-up-robust selection efficiencies and event ra...

  12. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00441925; The ATLAS collaboration

    2017-01-01

    Changes in the trigger menu, the online algorithmic event-selection of the ATLAS experiment at the LHC, are followed by adjustments to the ATLAS trigger monitoring systems. During Run 1, and so far in Run 2, ATLAS has deployed monitoring updates with the installation of new software releases at Tier-0, the first level of the ATLAS computing grid. Having to wait for a new software release to be installed at Tier-0, in order to update ATLAS offline trigger monitoring configurations, results in a lag with respect to the modification of the trigger menu. We present the design and implementation of a 'trigger menu-aware' monitoring system that aims to simplify the ATLAS operational workflows by allowing monitoring configuration changes to be made at the Tier-0 site by utilising an Oracle SQL database.
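
    A minimal sketch of the idea described above (not the ATLAS implementation): monitoring configurations are stored in a database keyed by the trigger-menu name, so they can be updated at Tier-0 without waiting for a new software release. Table, column, menu and chain names below are hypothetical, and Python's sqlite3 stands in for the Oracle database.

        # Minimal sketch, assuming a simple menu-keyed configuration table.
        # All names are hypothetical; sqlite3 stands in for the Oracle database.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE monitoring_config ("
            " menu_name TEXT, chain_name TEXT, histogram TEXT, binning TEXT,"
            " PRIMARY KEY (menu_name, chain_name, histogram))")
        conn.execute(
            "INSERT INTO monitoring_config VALUES (?, ?, ?, ?)",
            ("Physics_pp_example_menu", "HLT_example_muon_chain", "pt", "50,0,100"))

        def configs_for_menu(menu_name):
            """Return the monitoring configuration rows attached to a given menu."""
            cur = conn.execute(
                "SELECT chain_name, histogram, binning FROM monitoring_config"
                " WHERE menu_name = ?", (menu_name,))
            return cur.fetchall()

        print(configs_for_menu("Physics_pp_example_menu"))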

  13. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    Kennedy, J; Walker, R; Olszewski, A; Nderitu, S; Serfon, C; Duckeck, G

    2010-01-01

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and is currently comprised of 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore, we have defined good channels of communication between ATLAS, the T1 and the T2s, and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  14. ATLAS Detector Upgrade Prospects

    International Nuclear Information System (INIS)

    Dobre, M

    2017-01-01

    After the successful operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a centre-of-mass energy of 13 TeV in 2015 and 2016. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, which will deliver of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running by the end of 2018 to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining the potential benefits of extensions to larger pseudorapidity, particularly in the tracking and muon systems. This report summarizes various improvements to the ATLAS detector required to cope with the anticipated evolution of the LHC luminosity during this decade and the next. A brief overview is also given on physics prospects with a pp centre-of-mass energy of 14 TeV. (paper)

  15. Thermal and Alignment Analysis of the Instrument-Level ATLAS Thermal Vacuum Test

    Science.gov (United States)

    Bradshaw, Heather

    2012-01-01

    This paper describes the thermal analysis and test design performed in preparation for the ATLAS thermal vacuum test. NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be flown as the sole instrument aboard the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2). It will be used to take measurements of topography and ice thickness for Arctic and Antarctic regions, providing crucial data used to predict future changes in worldwide sea levels. Due to the precise measurements ATLAS is taking, the laser altimeter has very tight pointing requirements. Therefore, the instrument is very sensitive to temperature-induced thermal distortions. For this reason, it is necessary to perform a Structural, Thermal, Optical Performance (STOP) analysis not only for flight, but also to ensure performance requirements can be operationally met during instrument-level thermal vacuum testing. This paper describes the thermal model created for the chamber setup, which was used to generate inputs for the environmental STOP analysis. This paper also presents the results of the STOP analysis, which indicate that the test predictions adequately replicate the thermal distortions predicted for flight. This is a new application of an existing process, as STOP analyses are generally performed to predict flight behavior only. Another novel aspect of this test is that it presents the opportunity to verify pointing results of a STOP model, which is not generally done. It is possible in this case, however, because the actual pointing will be measured using flight hardware during thermal vacuum testing and can be compared to STOP predictions.

  16. submitter Muon trigger efficiency of the ATLAS Detector at LHC

    CERN Document Server

    Gallus, Petr

    The diploma thesis is devoted to the study of the muon trigger efficiency performance in the ATLAS experiment at the LHC collider. It contains measurements of the efficiency of the Level 1 and Level 2 muon triggers. The Level 1 (LVL1) trigger efficiency of the L1 MU20 and L1 2MU20 triggers is measured using Monte-Carlo simulated events. For Level 2, the efficiency of the MuFast trigger is analysed in relation to the LVL1 decision. In both examples it is shown that the trigger efficiency depends on the detector geometry and the transverse momentum pT of the muons. Key words: ATLAS, LHC, trigger
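
    As an illustration of the kind of measurement described above (a sketch under simplified assumptions, not the thesis code): the trigger efficiency in a pT bin is the fraction of muons in that bin that fired the trigger, here with a simple binomial uncertainty. The muon list and the threshold behaviour are invented toy values.

        # Toy sketch: trigger efficiency vs transverse momentum with binomial errors.
        import math

        muons = [  # (pT in GeV, fired_trigger) -- invented toy values
            (5.0, False), (18.0, False), (22.0, True), (25.0, True),
            (30.0, True), (35.0, False), (60.0, True),
        ]

        def efficiency(sample, pt_low, pt_high):
            """Fraction of muons in [pt_low, pt_high) that fired the trigger."""
            in_bin = [fired for pt, fired in sample if pt_low <= pt < pt_high]
            n, k = len(in_bin), sum(in_bin)
            if n == 0:
                return None, None
            eff = k / n
            err = math.sqrt(eff * (1.0 - eff) / n)  # simple binomial uncertainty
            return eff, err

        for lo, hi in [(0, 20), (20, 40), (40, 100)]:
            print((lo, hi), efficiency(muons, lo, hi))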

  17. Multi-threading in the ATLAS High-Level Trigger

    CERN Document Server

    Barton, Adam Edward; The ATLAS collaboration

    2017-01-01

    Over the next decade of LHC data-taking the instantaneous luminosity will reach up to 7.5 times the design value, with over 200 interactions per bunch-crossing, and will pose unprecedented challenges for the ATLAS trigger system. We report on an HLT prototype in which the need for HLT-specific components has been reduced to a minimum while retaining the key aspects of trigger functionality, including regional reconstruction and early event rejection. We report on the first experience of migrating trigger algorithms to this new framework and present the next steps towards a full implementation of the ATLAS trigger within AthenaMT.

  18. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Dias, Flavia; The ATLAS collaboration

    2016-01-01

    A very large number of simulated events is required for physics and performance studies with the ATLAS detector at the Large Hadron Collider. Producing these with the full GEANT4 detector simulation is highly CPU intensive. As a very detailed detector simulation is not always required, fast simulation tools have been developed to reduce the calorimeter simulation time by a few orders of magnitude. The fast simulation of ATLAS for the calorimeter systems used in Run 1, called Fast Calorimeter Simulation (FastCaloSim), provides a parameterized simulation of the particle energy response at the calorimeter read-out cell level. It is then interfaced to the ATLAS digitization and reconstruction software. In Run 1, about 13 billion events were simulated in ATLAS, out of which 50% were produced using fast simulation. For Run 2, a new parameterisation is being developed to improve the original version: It incorporates developments in geometry and physics lists of the last five years and benefits from knowledge acquire...

  19. Rare Decays of $B^{0}_{(s)}$ Mesons to Muon Pairs with the ATLAS Detector (Run 1)

    CERN Document Server

    Walkowiak, Wolfgang; The ATLAS collaboration

    2016-01-01

    The large amount of Heavy Flavor data collected by the ATLAS experiment at the LHC is potentially sensitive to New Physics, which could be evident in processes that are naturally suppressed in the Standard Model. With the full sample of data (Run 1) collected by the ATLAS detector in 7 and 8 TeV proton-proton collisions, the upper limit on the branching fraction of the $B^{0}\to\mu^{+}\mu^{-}$ decay is set at ${\cal B}(B^{0}\to\mu^{+}\mu^{-}) < 4.2\times 10^{-10}$ at 95% confidence level. For the $B^{0}_{s}$, the branching fraction ${\cal B}(B^{0}_{s}\to\mu^{+}\mu^{-}) = \left(0.9^{+1.1}_{-0.8}\right)\times 10^{-9}$ is obtained. The results are consistent with the Standard Model expectations and other available measurements.

  20. A first-level calorimeter trigger for the ATLAS experiment

    International Nuclear Information System (INIS)

    Perera, V.; Edwards, J.; Gee, N.

    1995-01-01

    In the RD27 collaboration the authors have carried out system studies on the implementation of the first-level calorimeter trigger processor system for the ATLAS experiment to be mounted at the Large Hadron Collider (LHC) at CERN. A demonstrator trigger system operated successfully with the RD3 and RD33 calorimeters at the full 40 MHz LHC bunch crossing (BC) rate. The prototype application-specific integrated circuits (ASICs) in this system each processed data from only a single trigger cell and its environment, which would lead to an extremely large system for ATLAS. With eight-bit parallel data, even ASICs processing multiple trigger cells would demand unacceptably large numbers of input pins and module connections. Initial studies of this I/O problem produced a solution based on asynchronous transmission of zero-suppressed and BC-tagged data on 160 Mbit/s serial links. This approach appeared to be feasible but would have introduced an additional latency of about 20 BCs. Further studies have led to the design of a fully-synchronous calorimeter trigger processor system using commercial high-speed optical links. The links will terminate in multi-chip modules (MCMs) incorporating custom-designed integrated optics, and the trigger algorithms will be implemented in ASICs
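
    For orientation only (a back-of-the-envelope figure consistent with the abstract, not a number taken from the paper): with the 25 ns LHC bunch spacing, the additional latency of about 20 BCs quoted for the asynchronous serial-link option corresponds to roughly 20 x 25 ns = 500 ns.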

  1. Top quark properties at ATLAS

    CERN Document Server

    Dilip, Jana

    2008-01-01

    The ATLAS potential for the study of the top quark properties and of physics beyond the Standard Model in the top quark sector is described. The measurements of the top quark charge, the spin and spin correlations, the Standard Model decay (t -> bW), rare top quark decays associated with flavour-changing neutral currents (t -> qX with X = gluon, Z, photon) and ttbar resonances are discussed. The sensitivity of the ATLAS experiment is estimated for an expected luminosity of 1 fb-1 at the LHC. The full simulation of the ATLAS detector is used. For the Standard Model measurements the expected precision is presented. For the tests of physics beyond the Standard Model, the 5 sigma discovery potential (in the presence of a signal) and the 95% Confidence Level (CL) limit (in the absence of a signal) are given.

  2. The Cerefy registered clinical brain atlas on CD-ROM. Based on the classic Talairach-Tournoux and Schaltenbrand-Wahren brain atlases. 2. ed.

    International Nuclear Information System (INIS)

    Nowinski, W.L.; Thirunavuukarasuu, A.

    2001-01-01

    This remarkable CD-ROM provides enhanced and extended versions of three world-famous Thieme atlases (Schaltenbrand and Wahren's Atlas for Stereotaxy of the Human Brain, Talairach and Tournoux's Co-Planar Stereotaxic Atlas of the Human Brain, and Referentially Oriented Cerebral MRI Anatomy). It contains the electronic atlases as well as an easy navigation system to facilitate searching for and displaying more than 525 anatomical structures. Revolutionizing the field of brain anatomy, the authors have segmented, labeled, and cross-referenced all the information contained in the books, and created contours for all three atlases. The Cerefy registered Clinical Brain Atlas now allows you to electronically navigate these atlases simultaneously on axial, coronal, and sagittal planes, and enjoy the ability to: 1. Access 210 high-quality, fully segmented, and labeled atlas images with corresponding contours, 2. Display and manipulate spatially co-registered atlases, 3. Dynamically label images with structure names and descriptions, and then highlight selected structures in the atlas image, 4. Zoom images at five different levels, measure, search, set the triplanar view, obtain coordinates, save, and print, 5. Access on-line help, a glossary, and supportive atlas materials. (orig.)

  3. The Level-1 Tile-Muon Trigger in the Tile Calorimeter upgrade program

    International Nuclear Information System (INIS)

    Ryzhov, A.

    2016-01-01

    The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). TileCal provides highly-segmented energy measurements for incident particles. Information from TileCal's outermost radial layer can assist in muon tagging in the Level-1 Muon Trigger by rejecting fake muon triggers due to slow charged particles (typically protons) without degrading the efficiency of the trigger. The main activity of the Tile-Muon Trigger in the ATLAS Phase-0 upgrade program was to install and to activate the TileCal signal processor module for providing trigger inputs to the Level-1 Muon Trigger. This report describes the Tile-Muon Trigger, focusing on the new detector electronics such as the Tile Muon Digitizer Board (TMDB) that receives, digitizes and then provides the signal from eight TileCal modules to three Level-1 muon endcap Sector-Logic Boards.

  4. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  5. Beam Test of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Thomas, J P; Typaldos, D; Watkins, P M; Watson, A; Achenbach, R; Föhlisch, F; Geweniger, C; Hanke, P; Kluge, E E; Mahboubi, K; Meier, K; Meshkov, P; Rühr, F; Schmitt, K; Schultz-Coulon, H C; Ay, C; Bauss, B; Belkin, A; Rieke, S; Schäfer, U; Tapprogge, T; Trefzger, T; Weber, GA; Eisenhandler, E F; Landon, M; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Mirea, A; Perera, V J O; Qian, W; Sankey, D P C; Bohm, C; Hellman, S; Hidvegi, A; Silverstein, S

    2005-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce Regions-of-Interest (RoIs) and trigger multiplicities. The latter are sent in real time to the Central Trigger Processor (CTP) where the Level-1 decision is made. On receipt of a Level-1 Accept, Readout Driver Modules (RODs) provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. RoI information is sent to the RoI builder (RoIB) to help reduce the amount of data required for the Level-2 Trigger. The Level-1 Calorimeter Trigger System at the test beam consisted of 1 Preprocessor module, 1 Cluster Processor Module, 1 Jet/Energy Module and 2 Common Merger Modules. Calorimeter energies were successfully handled throughout the chain and trigger objects were sent to the CTP. Level-1 Accepts were successfully produced and used to drive the readout path. Online diagno...

  6. ATLAS detector performance in Run1: Calorimeters

    CERN Document Server

    Burghgrave, B; The ATLAS collaboration

    2014-01-01

    ATLAS operated with excellent efficiency during the Run 1 data-taking period, recording an integrated luminosity of 5.3 fb-1 at √s = 7 TeV in 2011 and 21.6 fb-1 at √s = 8 TeV in 2012. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with a good data-quality efficiency, improving over the whole of Run 1. This poster presents the overall Run 1 status and performance, LS1 work, and preparations for Run 2.

  7. Evolution and experience with the ATLAS Simulation at Point1 Project

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00389536; The ATLAS collaboration; Brasolin, Franco; Kouba, Tomas; Schovancova, Jaroslava; Fazio, Daniel; Di Girolamo, Alessandro; Scannicchio, Diana; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander; Lee, Christopher

    2017-01-01

    The Simulation at Point1 project is successfully running standard ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We present our experience with using the Event Service that provides the event-level granularity of computations. We show the design decisions and overhead time related to the usage of the Event Service. The improved utilization of the resources is also presented with the recent development in monitoring, automatic alerting, deployment and GUI.

  8. Evolution and experience with the ATLAS simulation at Point1 project

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Di Girolamo, Alessandro; Kouba, Tomas; Lee, Christopher; Scannicchio, Diana; Schovancova, Jaroslava; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2016-01-01

    The Simulation at Point1 project is successfully running traditional ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We will present our experience with using the Event Service that provides the event-level granularity of computations. We will show the design decisions and overhead time related to the usage of the Event Service. The improved utilization of the resources will also be presented with the recent development in monitoring, automatic alerting, deployment and GUI.

  9. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used...
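
    A minimal sketch of the cost-accounting idea described above (not the ATLAS framework; record fields, algorithm names and numbers are hypothetical): per-algorithm CPU time and data-request counts are aggregated over executions to estimate each trigger algorithm's resource cost.

        # Toy sketch: aggregate per-algorithm CPU time and data requests.
        from collections import defaultdict

        records = [  # (algorithm name, CPU time in ms, data requests) -- invented
            ("L2_muon_example", 2.1, 3), ("L2_muon_example", 1.8, 2),
            ("EF_egamma_example", 45.0, 10), ("EF_egamma_example", 52.0, 12),
        ]

        totals = defaultdict(lambda: {"calls": 0, "cpu_ms": 0.0, "requests": 0})
        for alg, cpu_ms, requests in records:
            totals[alg]["calls"] += 1
            totals[alg]["cpu_ms"] += cpu_ms
            totals[alg]["requests"] += requests

        for alg, t in totals.items():
            print(alg,
                  "mean CPU/call = %.1f ms" % (t["cpu_ms"] / t["calls"]),
                  "mean requests/call = %.1f" % (t["requests"] / t["calls"]))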

  10. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization is given in Fig. 1 (caption: "New ATLAS Software & Computing organization"; image not reproduced here). Two Management Boards will help the Computing Coordinator and the Software Project...

  11. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review is given of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  12. ATLAS calorimeter and topological trigger upgrades for Phase 1

    CERN Document Server

    Silverstein, S

    2011-01-01

    The ATLAS Level-1 Calorimeter Trigger (L1Calo) collaboration is pursuing two hardware upgrade programs for Phase 1 of the LHC upgrade. The first of these is the development of a new mixed-signal multi-chip module (MCM) for the PreProcessor system, based on faster FADCs and a modern FPGA. Designed as a drop-in replacement for the existing MCM, the FPGA also enables future upgrades to the PreProcessor algorithms, including enhanced digital filtering and compensation for the time-variation of pedestals. It is also planned to augment the current multiplicity-based trigger by adding topology-based algorithms. This is made possible by adding jet and EM/hadron Regions of Interest (RoIs) to the L1Calo real-time data path. A synchronous, pipelined topological processor (TP) based on high-density FPGAs and multi-Gbit optical links gathers all RoI information and performs the topological algorithms.

  13. The ATLAS event filter

    CERN Document Server

    Beck, H P; Boissat, C; Davis, R; Duval, P Y; Etienne, F; Fede, E; Francis, D; Green, P; Hemmer, F; Jones, R; MacKinnon, J; Mapelli, Livio P; Meessen, C; Mommsen, R K; Mornacchi, Giuseppe; Nacasch, R; Negri, A; Pinfold, James L; Polesello, G; Qian, Z; Rafflin, C; Scannicchio, D A; Stanescu, C; Touchard, F; Vercesi, V

    1999-01-01

    An overview of the studies for the ATLAS Event Filter is given. The architecture and the high-level design of the DAQ-1 prototype are presented. The current status of the prototypes is briefly given. Finally, future plans and milestones are given. (11 refs).

  14. A new Highly Selective First Level ATLAS Muon Trigger With MDT Chamber Data for HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective first-level triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will exceed that of the LHC by almost an order of magnitude. The ATLAS first-level muon trigger rate is dominated by low-momentum muons below the nominal trigger threshold, a consequence of the poor momentum resolution at trigger level caused by the moderate spatial resolution of the resistive-plate and thin-gap trigger chambers. This limitation can be overcome by including the data of the precision muon drift tube chambers in the first-level trigger decision. This requires the implementation of a fast MDT read-out chain and a fast MDT track reconstruction. A hardware demonstrator of the fast read-out chain was successfully tested under HL-LHC operating conditions at CERN's Gamma Irradiation Facility. It was shown that the data provided by the demonstrator can be processed with a fast track reconstruction algorithm on an ARM CPU within the 6 microseconds latency...
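
    To illustrate the flavour of a fast segment reconstruction such as the one referred to above (a sketch under simplified assumptions, not the actual trigger algorithm): a straight line is fitted by least squares to drift-tube hit positions, and the fitted slope is the quantity that sharpens the momentum estimate. The hit coordinates below are invented.

        # Toy sketch: least-squares straight-line fit y = a + b*z to tube hits.
        def fit_line(hits):
            n = len(hits)
            sz = sum(z for z, _ in hits)
            sy = sum(y for _, y in hits)
            szz = sum(z * z for z, _ in hits)
            szy = sum(z * y for z, y in hits)
            denom = n * szz - sz * sz
            b = (n * szy - sz * sy) / denom  # slope of the track segment
            a = (sy - b * sz) / n            # intercept
            return a, b

        hits = [(0.0, 0.01), (0.5, 0.12), (1.0, 0.24), (1.5, 0.35)]  # (z, y) in metres
        print(fit_line(hits))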

  15. Bd/s -> mu+ mu- in ATLAS

    CERN Document Server

    Guenther, Jaroslav; The ATLAS collaboration

    2016-01-01

    The ATLAS Experiment has conducted a search for the rare decays of Bs and Bd mesons into mu+mu-. An integrated luminosity of 25 fb−1 of proton-proton collisions collected during LHC Run 1 was studied to provide the new results presented in this talk. An upper limit is set on the branching ratio BR(Bd to mu+mu-) < 4.2×10−10 at 95% confidence level. For the Bs, the ATLAS measurement yields the branching ratio BR(Bs to mu+mu-) = (0.9+1.1−0.8)×10−9. The result is consistent with the Standard Model expectation and other available measurements.

  16. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2014-01-01

    Physics processes involving tau leptons play a crucial role in understanding particle physics at the high-energy frontier. The ability to efficiently trigger on events containing hadronic tau decays is therefore of particular importance to the ATLAS experiment. During the 2012 run, the Large Hadron Collider (LHC) reached instantaneous luminosities of nearly $10^{34}$ cm$^{-2}$s$^{-1}$ with bunch crossings occurring every 50 ns. This resulted in a huge event rate and a high probability of overlapping interactions per bunch crossing (pile-up). With this in mind, it was necessary to design an ATLAS tau trigger system that could reduce the event rate to a manageable level while efficiently extracting the most interesting physics events in a pile-up-robust manner. In this poster the ATLAS tau trigger is described, its performance during 2012 is presented, and the outlook for LHC Run II is briefly summarized.

  17. Validation Tools for ATLAS Muon Spectrometer Commissioning

    International Nuclear Information System (INIS)

    Benekos, N.Chr.; Dedes, G.; Laporte, J.F.; Nicolaidou, R.; Ouraou, A.

    2008-01-01

    The ATLAS Muon Spectrometer (MS), currently being installed at CERN, is designed to measure final-state muons from 14 TeV proton-proton interactions at the Large Hadron Collider (LHC) with a good momentum resolution of 2-3% at 10-100 GeV/c and 10% at 1 TeV, taking into account the high background environment, the inhomogeneous magnetic field, and the large size of the apparatus (24 m diameter by 44 m length). The MS layout of the ATLAS detector consists of a large toroidal magnet, arrays of high-pressure drift tubes for precise tracking and dedicated fast detectors for the first-level trigger, and is organized in eight Large and eight Small sectors. All the detectors of the barrel toroid have been installed and the commissioning has started with cosmic rays. In order to validate the MS performance using cosmic events, a Muon Commissioning Validation package has been developed and its results are presented in this paper. Integration with the rest of the ATLAS sub-detectors is now being done in the ATLAS cavern

  18. The performance of the ATLAS missing transverse momentum high-level trigger in 2015 pp collisions at $13$ TeV

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00534627

    The performance of the ATLAS missing transverse momentum ($E_\text{T}^\text{miss}$) high-level trigger during 2015 operation is presented. In 2015, the Large Hadron Collider operated at a higher centre-of-mass energy and shorter bunch spacing ($\sqrt{s} = 13$ TeV and $25$ ns, respectively) than in previous operation. In future operation, the Large Hadron Collider will operate at even higher instantaneous luminosity ($\mathcal{O}(10^{34})$ cm$^{-2}$ s$^{-1}$) and produce a higher average number of interactions per bunch crossing, $\langle \mu \rangle$. These operating conditions will pose significant challenges to the $E_\text{T}^\text{miss}$ trigger efficiency and rate. An overview of the new algorithms implemented to address these challenges, and of the existing algorithms, is given. An integrated luminosity of $1.4$ fb$^{-1}$ with $\langle \mu \rangle = 14$ was collected from pp collisions of the Large Hadron Collider by the ATLAS detector during October and November 2015 and was used to s...

  19. A Level-2 trigger algorithm for the identification of muons in the ATLAS Muon Spectrometer

    CERN Document Server

    Di Mattia, A; Dos Anjos, A; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Conde-Muíño, P; De Santo, A; Díaz-Gómez, M; Dosil, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luminari, L; Maeno, T; Masik, J; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pasqualucci, E; Pérez-Réale, V; Pinfold, J L; Pinto, P; Qian, Z; Resconi, S; Rosati, S; Sánchez, C; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S S; Sutton, M; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Computing In High Energy Physics

    2005-01-01

    The ATLAS Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted features. The "µFast" algorithm performs the standalone feature extraction, providing a first reduction of the muon event rate from Level-1. It confirms muon track candidates with a precise measurement of the muon momentum. The algorithm is designed to be both conceptually simple and fast, so as to be readily implemented in the demanding online environment in which the Level-2 selection code will run. Nevertheless, its physics performance approaches, in some cases, that of the offline reconstruction algorithms. This paper describes the implemented algorithm together with the software techniques employed to increase its timing p...

  20. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2017-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons played, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake-rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  1. The ATLAS Electron and Photon Trigger

    CERN Document Server

    Jones, Samuel David; The ATLAS collaboration

    2018-01-01

    Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons played, for example, an important role in the discovery and measurement of the Higgs boson. Dedicated triggers are also used to collect data for calibration, efficiency and fake-rate measurements. The ATLAS trigger system is divided into a hardware-based Level-1 trigger and a software-based high-level trigger, both of which were upgraded during the LHC shutdown in preparation for Run-2 operation. To cope with the increasing luminosity and more challenging pile-up conditions at a center-of-mass energy of 13 TeV, the trigger selections at each level are optimized to control the rates and keep efficiencies high. To achieve this goal multivariate analysis techniques are used. The ATLAS electron and photon triggers and their performance with Run 2 dat...

  2. An Overview of the ATLAS High Level Trigger Dataflow and Supervision

    CERN Document Server

    Wheeler, S; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A; Wengler, T; Werner, P; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; RT 2003 13th IEEE-NPSS Real Time Conference

    2004-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter (EF). The LVL2 trigger performs event selection with optimized algorithms using selected data guided by Region of Interest pointers provided by the LVL1 trigger. Those events selected by LVL2 are built into complete events, which are passed to the EF for a further stage of event selection and classification using off-line algorithms. Events surviving the EF selection are passed for off-line storage. The two stages of the HLT are implemented on processor farms. The concept of distributing the selection process between LVL2 and EF is a key element of the architecture, which allows it to be flexible to changes (luminosity, detector knowledge, background conditions, etc.). Although there are some differences in the requirements between these sub-systems, there are many commonalities. An overview of the dataflow (event selection) an...

  3. Upgrading ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Heath, Matthew Peter; The ATLAS collaboration

    2017-01-01

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully used in ATLAS. FastCaloSim provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitization and reconstruction software, and it can be tuned to data more easily than Geant4. Now an improved version of FastCaloSim is in development, incorporating the experience with the version used during Run-1. The new FastCaloSim aims to overcome some limitations of the first version by improving the description of s...

  4. The ATLAS Fast Tracker

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    The use of tracking information at the trigger level in the LHC Run II period is crucial for the trigger and data acquisition (TDAQ) system. The tracking precision is in fact important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many simultaneous collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, full reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a specific processor: the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high-performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker information. Patte...

  5. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the LHC Run-2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  6. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) utilizes a trigger system that consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. The ATLAS trigger successfully collected collision data during the first run of the LHC (Run-1), between 2009 and 2013, at centre-of-mass energies between 900 GeV and 8 TeV. In the second run of the LHC (Run-2), starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV and provides a higher collision luminosity. In addition, the number of collisions occurring in the same bunch crossing increases. The ATLAS trigger system has to cope with these challenges while maintaining or even improving the efficiency to select relevant physics processes. In this talk, first we will review the ATLAS trigger ...

  7. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  8. The RPC LVL1 trigger system of the muon spectrometer of the ATLAS experiment at LHC

    CERN Document Server

    Aielli, G; Alviggi, M G; Biglietti, M; Bocci, V; Brambilla, Elena; Camarri, P; Canale, V; Caprio, M A; Cardarelli, R; Carlino, G; Cataldi, G; Chiodini, G; Conventi, F; De Asmundis, R; Della Pietra, M; Della Volpe, D; Di Ciaccio, A; Di Mattia, A; Di Simone, A; Falciano, S; Gorini, E; Grancagnolo, F; Iengo, P; Liberti, B; Luminari, L; Nisati, A; Pastore, F; Patricelli, S; Perrino, R; Petrolo, E; Primavera, M; Sekhniaidze, G; Spagnolo, S; Salamon, A; Santonico, R; Vari, R; Veneziano, Stefano

    2004-01-01

    The ATLAS Trigger System has been designed to reduce the LHC interaction rate of about 1 GHz to the foreseen storage rate of about 100 Hz. Three trigger levels are applied in order to fulfill such a requirement. A detailed simulation of the ATLAS experiment including the hardware components and the logic of the Level-1 Muon trigger in the barrel of the muon spectrometer has been performed. This simulation has been used not only to evaluate the performances of the system but also to optimize the trigger logic design. In the barrel of the muon spectrometer the trigger will be given by means of resistive plate chambers (RPCs) working in avalanche mode. Before being mounted on the experiment, accurate quality tests with cosmic rays are carried out on each RPC chamber using the test station facility of the INFN and University laboratory of Napoli. All working parameters are measured and the uniformity of the efficiency on the whole RPC surface is required. A summary of the Napoli cosmic rays tests, together with a...

  9. Error detection, handling and recovery at the High Level Trigger of the ATLAS experiment at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223972; The ATLAS collaboration

    2016-01-01

    The complexity of the ATLAS High Level Trigger (HLT) requires a robust system for error detection and handling during online data-taking; it also requires an offline system for the recovery of events where no trigger decision could be made online. The error detection and handling ensure smooth operation of the trigger system and provide debugging information necessary for offline analysis and diagnosis. In this presentation, we give an overview of the error detection, handling and recovery of problematic events at the HLT of ATLAS.

  10. ATLAS upgrades for the next decades

    CERN Document Server

    Hopkins, Walter; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The final goal is to extend the dataset from the few hundred fb$^{-1}$ expected for LHC running to 3000 fb$^{-1}$ by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy and further upgrades. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for...

  11. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS Trigger algorithms developed to exploit General-Purpose Graphics Processor Units (GPGPUs). ATLAS is a particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system has two levels, the hardware-based Level 1 and the High Level Trigger, implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. GPGPUs are being evaluated as a potential solution for trigger algorithm acceleration. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Cal...

  12. The Level-1 Tile-Muon Trigger in the Tile Calorimeter Upgrade Program

    CERN Document Server

    Ryzhov, Andrey; The ATLAS collaboration

    2016-01-01

    The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). The TileCal provides highly-segmented energy measurements for incident particles. Information from TileCal's last radial layer can assist muon tagging in the Level-1 muon trigger. It can help in the rejection of fake muon triggers arising from background radiation (slow charged particles, typically protons) without degrading the efficiency of the trigger. The main TileCal activity for the ATLAS Phase-0 upgrade program (2013-2014) was the activation of the TileCal third-layer signal for assisting the muon trigger at 1.0<|η|<1.3 (Tile-Muon Trigger). This report describes the Tile-Muon Trigger within the TileCal upgrade activities, focusing on the new on-detector electronics such as the Tile Muon Digitizer Board (TMDB), which receives and digitizes the signal from eight TileCal modules and provides it to three Level-1 muon endcap sector logic blocks.

  13. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the High Luminosity LHC will face a fivefold increase in the number of interactions per bunch crossing relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware based first trigger level of the experiment. This article will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out using data from the strip subsystem only or both strip and pixel subsystems.

  14. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy. This talk will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out comparing two detector geometries and using...

  15. A System for Monitoring and Tracking the LHC Beam Spot within the ATLAS High Level Trigger

    CERN Document Server

    Bartoldus, R; The ATLAS collaboration; Cogan, J; Salnikov, A; Strauss, E; Winklmeier, F

    2012-01-01

    The parameters of the beam spot produced by the LHC in the ATLAS interaction region are computed online using the ATLAS High Level Trigger (HLT) system. The high rate of triggered events is exploited to make precise measurements of the position, size and orientation of the luminous region in near real-time, as these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online determination, monitoring and beam spot feedback system in ATLAS. A specially designed algorithm, which uses tracks registered in the silicon detectors to reconstruct event vertices, is executed on the HLT processor farm of several thousand CPU cores. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. The reconstructed beam values are corrected for detector resolution effects, measured in situ from the separation of vertices whose tracks have been split into two collections. Furthermore, measurements for individual ...
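
    A minimal sketch of the resolution-correction idea mentioned above (illustrative only; the factor relating split-vertex differences to the vertex resolution and all numbers are simplifying assumptions, not the ATLAS procedure): the observed width of the reconstructed-vertex distribution is corrected by subtracting the vertex resolution in quadrature, with the resolution estimated from the spread of position differences between split vertices.

        # Toy sketch: quadrature subtraction of a vertex resolution estimated
        # from split-vertex position differences. All numbers are invented.
        import math

        def corrected_width(observed_width, split_differences):
            n = len(split_differences)
            mean = sum(split_differences) / n
            var = sum((d - mean) ** 2 for d in split_differences) / (n - 1)
            # Assume the difference of two independently fitted half-vertices
            # carries twice the single-vertex resolution variance.
            resolution = math.sqrt(var / 2.0)
            return math.sqrt(observed_width ** 2 - resolution ** 2)

        diffs_mm = [0.02, -0.015, 0.03, -0.01, 0.025, -0.02]
        print(corrected_width(0.05, diffs_mm))  # luminous-region width after correction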

  16. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level 1 rate of 75 kHz to a few kHz event-building rate after Level 2 and a few hundred Hz of output rate to disk. It has operated with an average data-taking efficiency of about 94% during the recent years. The performance has far exceeded the initial requirements, with about 5 kHz event-building rate and 500 Hz of output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and to improve the performance. On the network side, new core switches will be deployed, and the possible use of 10 GBit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high-level trigger system foresees a merging of the Level 2 and Event Filter functionality on a single node, including the event building. This will represent a big simplification of the existing system, while ...

  17. Construction of an in vivo human spinal cord atlas based on high-resolution MR images at cervical and thoracic levels: preliminary results.

    Science.gov (United States)

    Taso, Manuel; Le Troter, Arnaud; Sdika, Michaël; Ranjeva, Jean-Philippe; Guye, Maxime; Bernard, Monique; Callot, Virginie

    2014-06-01

    Our goal was to build a probabilistic atlas and anatomical template of the human cervical and thoracic spinal cord (SC) that could be used for segmentation algorithm improvement, parametric group studies, and enrichment of biomechanical modelling. High-resolution axial T2*-weighted images were acquired at 3T on 15 healthy volunteers using a multi-echo gradient-echo sequence (1 slice per vertebral level from C1 to L2). After manual segmentation, linear and affine co-registrations were performed, providing either inter-individual morphometric variability maps or substructure probabilistic maps [CSF, white and grey matter (WM/GM)] and an anatomical SC template. The largest inter-individual morphometric variations were observed at the thoraco-lumbar levels and in the posterior GM. Mean SC diameters were in agreement with the literature and higher than post-mortem measurements. A representative SC MR template was generated, and values up to 90 and 100% were observed on the GM- and WM-probability maps. This work provides a probabilistic SC atlas and a template that could offer great potential for parametric MRI analysis (DTI/MTR/fMRI) and group studies, similar to what has already been performed using a brain atlas. It also offers great perspectives for biomechanical models usually based on post-mortem or generic data. Further work will consider integration into an automated SC segmentation pipeline.

  18. Atlas-based delineation of lymph node levels in head and neck computed tomography images

    International Nuclear Information System (INIS)

    Commowick, Olivier; Gregoire, Vincent; Malandain, Gregoire

    2008-01-01

    Purpose: Radiotherapy planning requires accurate delineations of the tumor and of the critical structures. Atlas-based segmentation has been shown to be very efficient for automatically delineating brain critical structures. We therefore propose to construct an anatomical atlas of the head and neck region. Methods and materials: Due to the high anatomical variability of this region, an atlas built from a single image, as for the brain, is not adequate. We address this issue by building a symmetric atlas from a database of manually segmented images. First, we develop an atlas construction method and apply it to a database of 45 Computed Tomography (CT) images from patients with node-negative pharyngo-laryngeal squamous cell carcinoma manually delineated for radiotherapy. Then, we qualitatively and quantitatively evaluate the results generated by the built atlas in a leave-one-out framework on the database. Results: We present qualitative and quantitative results using this atlas construction method. The evaluation was performed on a subset of 12 patients among the original CT database of 45 patients. Qualitative results depict visually well-delineated structures. The quantitative results are also good, with an error with respect to the best achievable results ranging from 0.196 to 0.404, with a mean of 0.253. Conclusions: These results show the feasibility of using such an atlas for radiotherapy planning. Many perspectives arise from this work, ranging from extensive validation to the construction of several atlases representing sub-populations, to account for large inter-patient variability, and populations with node-positive tumors

  19. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons and will give the detector the same inclination as the LHC accelerator.

  20. Design, Results, Evolution and Status of the ATLAS simulation in Point1 project.

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2015-01-01

    During the LHC long shutdown period (LS1), which started in 2013, the Simulation in Point1 (Sim@P1) project takes opportunistic advantage of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 compute nodes, which are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU- rather than I/O-bound. It is capable of running up to 2500 virtual machines (VMs) with 8 CPU cores each, for a total of up to 20000 parallel running jobs. This contribution gives a thorough review of the design, the results and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ farm computing resources. During LS1, Sim@P1 was one of the most productive GRID sites: it delivered more than 50 million CPU-hours and generated more than 1.7 billion Monte Carlo events for various analysis communities within the ATLAS collaboration. The particular design ...

  1. The ATLAS Women's Network: one year of activities

    CERN Multimedia

    Paula Eerola

    The idea for an ATLAS Women's Network was born during the ATLAS overview week in October 2005, when a few of us discussed our experiences and pondered what we could do. We felt that it was important to increase the visibility of women working in ATLAS in order to make better and more effective use of the ATLAS human resources, that is, to make sure that women are duly included at all levels. Furthermore, it is our belief that making ATLAS a better working environment for female collaborators and other female co-workers will benefit both us and the collaboration as a whole. On the individual level, all of us thought that we could benefit from peer support and experience sharing, and an ATLAS Women's Network could facilitate this by developing contacts between ATLAS women in ATLAS institutes worldwide. Finally, we thought that it was important to increase the number of women studying physics and working in the field of physics research by identifying gender barriers in the career paths of women i...

  2. Studying radiative B decays with the Atlas detector; Etude des desintegrations radiatives des mesons B dans le detecteur ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Viret, S

    2004-09-01

    This thesis is dedicated to the study of radiative B decays with the ATLAS detector at the LHC (Large Hadron Collider). Radiative decays belong to the rare-decays family. Rare-decay transitions involve flavor-changing neutral currents (for example b → sγ), which are forbidden at lowest order in the Standard Model. Therefore these processes occur only at the next order, thus involving penguin or box diagrams, which are very sensitive to 'new physics' contributions. The main goal of our study is to show that it would be possible to develop an online selection strategy for radiative B decays with the ATLAS detector. To this end, we have studied the treatment of low-energy photons by the ATLAS electromagnetic calorimeter (ECal). Our analysis shows that the ATLAS ECal will be efficient with these particles. This property is extensively used in the next section, where a selection strategy for radiative B decays is proposed. Indeed, we look for a low-energy region of interest in the ECal as early as level 1 of the trigger. Then, photon identification cuts are performed in this region at level 2. However, a large part of the proposed selection scheme is also based on the inner detector, particularly at level 2. The final results show that large amounts of signal events could be collected in only one year by ATLAS. A preliminary significance (S/√B) estimation is also presented. Encouraging results concerning the observability of exclusive radiative B decays are obtained. (author)

  4. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and for the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger-object correlations) at the three trigger components L1Topo, CT, and HLT; and on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data-taking conditions, such as falling luminosity or rate spikes in the detector readout ...
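
    The rate-dependent prescaling mentioned in this record can be illustrated with a small sketch (hypothetical chain names and rates, not the ATLAS configuration software): a prescale of N keeps roughly one in N accepted candidates, so the factor needed to reach a target output rate follows directly from the ratio of rates.

```python
import math

def compute_prescales(raw_rates_hz, target_rates_hz):
    """Toy calculation of integer prescale factors per trigger chain."""
    prescales = {}
    for chain, raw in raw_rates_hz.items():
        target = target_rates_hz.get(chain, raw)
        if target <= 0.0:
            prescales[chain] = -1                    # convention: chain disabled
        else:
            prescales[chain] = max(1, math.ceil(raw / target))
    return prescales

# As luminosity falls during a fill, raw rates drop and prescales can be relaxed
print(compute_prescales({"L1_MU20": 25e3, "L1_EM22": 18e3},
                        {"L1_MU20": 16e3, "L1_EM22": 18e3}))
# {'L1_MU20': 2, 'L1_EM22': 1}
```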

  5. The Equine PeptideAtlas

    DEFF Research Database (Denmark)

    Bundgaard, Louise; Jacobsen, Stine; Sørensen, Mette Aamand

    2014-01-01

    Progress in MS-based methods for veterinary research and diagnostics is lagging behind human research, and proteome data of domestic animals are still not well represented in open-source data repositories. This is particularly true for the equine species. Here we present a first Equine PeptideAtlas encompassing high-resolution tandem MS analyses of 51 samples representing a selection of equine tissues and body fluids from healthy and diseased animals. The raw data were processed through the Trans-Proteomic Pipeline to yield high-quality identification of proteins and peptides. The current release comprises 24 131 distinct peptides representing 2636 canonical proteins observed at false discovery rates of 0.2% at the peptide level and 1.4% at the protein level. Data from the Equine PeptideAtlas are available for experimental planning, validation of new datasets, and as a proteomic...

  6. Fast Calorimeter Simulation in ATLAS

    CERN Document Server

    Schaarschmidt, Jana; The ATLAS collaboration

    2017-01-01

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully used in ATLAS. FastCaloSim provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitization and reconstruction software, and it can be tuned to data more easily than GEANT4. It is 500 times faster than full simulation in the calorimeter system. Now an improved version of FastCaloSim is in development, incorporating the experience with the version used during Run-1. The new FastCaloSim makes use of mach...

  7. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Hasib, Ahmed; The ATLAS collaboration

    2017-01-01

    Producing the very large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing CPU requirements when detailed detector simulations are not needed. During the LHC Run-1, a fast calorimeter simulation (FastCaloSim) was successfully used in ATLAS. FastCaloSim provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitization and reconstruction software, and it can be tuned to data more easily than GEANT4. Now an improved version of FastCaloSim is in development, incorporating the experience with the version used during Run-1. The new FastCaloSim makes use of statistical techniques such as principal component analysis, and a neural n...
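
    To make the idea of a layer-energy parameterisation more concrete, here is a minimal sketch (NumPy only, with invented inputs; it is not the FastCaloSim code) of decorrelating the energy fractions deposited in the calorimeter layers with a principal component analysis, so that correlated layer energies can later be generated from a handful of independent components.

```python
import numpy as np

def fit_layer_pca(layer_fractions):
    """Fit a PCA model to per-event energy fractions in the calorimeter layers.

    layer_fractions: array (n_events, n_layers) from detailed simulation,
    each row summing to ~1. Returns mean, eigenvectors (columns) and eigenvalues.
    """
    X = np.asarray(layer_fractions, dtype=float)
    mean = X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X - mean, rowvar=False))
    return mean, eigvec[:, ::-1], eigval[::-1]        # leading components first

def sample_layer_fractions(mean, eigvec, eigval, n, rng=None):
    """Draw new, correlated layer-energy fractions from the PCA model."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n, eigval.size))         # independent components
    x = mean + (z * np.sqrt(np.clip(eigval, 0.0, None))) @ eigvec.T
    x = np.clip(x, 0.0, None)                         # crude positivity guard
    return x / x.sum(axis=1, keepdims=True)           # re-normalise to unit sum

# Toy usage: 3 correlated layers from a fake "full simulation" sample
rng = np.random.default_rng(0)
toy = rng.dirichlet([8.0, 4.0, 1.0], size=5000)       # stands in for Geant4 output
model = fit_layer_pca(toy)
print(sample_layer_fractions(*model, n=3, rng=rng))
```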

  8. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00176100; The ATLAS collaboration

    2016-01-01

    The physics and performance studies of the ATLAS detector at the Large Hadron Collider require a large number of simulated events. A GEANT4 based detailed simulation of the ATLAS calorimeter systems is highly CPU intensive and such resolution is often unnecessary. To reduce the calorimeter simulation time by a few orders of magnitude, fast simulation tools have been developed. The Fast Calorimeter Simulation (FastCaloSim) provides a parameterised simulation of the particle energy response at the calorimeter read-out cell level. In Run 1, about 13 billion events were simulated in ATLAS, out of which 50% were produced using fast simulation. For Run 2, a new parameterisation is being developed to improve the original version: it incorporates developments in geometry and physics lists during the last five years and benefits from the knowledge acquired from the Run 1 data. The algorithm uses machine learning techniques to improve the parameterisations and to optimise the amount of information to be stored in the...

  9. The New ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Heath, Matthew Peter; The ATLAS collaboration

    2017-01-01

    Producing the large samples of simulated events required by many physics and performance studies with the ATLAS detector using the full GEANT4 detector simulation is highly CPU intensive. Fast simulation tools are a useful way of reducing the CPU requirements when detailed detector simulations are not needed. During Run-1 of the LHC, a fast calorimeter simulation (FastCaloSim) was successfully used in ATLAS. FastCaloSim provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitisation and reconstruction software, and it can be tuned to data more easily than Geant4. Now an improved version of FastCaloSim is in development, incorporating the experience with the version used during Run-1. The new FastCaloSim aims to overcome some limitations of the first version by improving the description of...

  10. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  11. The design and performance of the ATLAS jet trigger

    International Nuclear Information System (INIS)

    Shimizu, Shima

    2014-01-01

    The ATLAS jet trigger is an important element of the event selection process, providing data samples for studies of Standard Model physics and searches for new physics at the LHC. The ATLAS jet trigger system has undergone substantial modifications over the past few years of LHC operation, as experience developed with triggering in a high-luminosity, high-pileup environment. In particular, the region-of-interest based strategy has been replaced by a full scan of the calorimeter data at the third trigger level, and by a full scan of the level-1 trigger input at level-2 for some specific trigger chains. Hadronic calibration and cleaning techniques are applied in order to provide improved performance and increased stability in high-luminosity data-taking conditions. In this note we discuss the implementation and operational aspects of the ATLAS jet trigger during the 2011 and 2012 data-taking periods at the LHC.

  12. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Green, B; Kugel, A; Joos, M; Panduro Vazquez, W; Schumacher, J; Teixeira-Dias, P; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS DAQ system. It receives and buffers data of events accepted by the first-level trigger from all subdetectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a 1 GbE-based network. The ATLAS ROS is being completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3: obsolete technologies have to be replaced, and space constraints require the new system to be compact. The new ROS will consist of roughly 100 Linux-based 2U rack-mounted server PCs, each equipped with two PCIe I/O cards and four 10 GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, the so-called RobinNP firmware. They will provide the connectivity to about 2000 optical point-to-point links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and ...

  13. Derived Physics Data Production in ATLAS: Experience with Run 1 and Looking Ahead (proceedings)

    CERN Document Server

    Laycock, P; The ATLAS collaboration; Beckingham, M; Henderson, R; Zhou, L

    2014-01-01

    While a significant fraction of ATLAS physicists directly analyse the AOD (Analysis Object Data) produced at the CERN Tier-0, a much larger fraction have opted to analyse data in a flat ROOT format. The large-scale production of this Derived Physics Data (DPD) format must cater both for detailed performance studies of the ATLAS detector and object reconstruction and for higher-level, generally lighter-content physics analysis. The delay between data-taking and DPD production allows for software improvements, while the ease of arbitrarily defined skimming/slimming of this format results in an optimally performant format for end-user analysis. Given the diversity of requirements, there are many flavours of DPDs, which can result in large peak computing resource demands. While the current model has proven to be very flexible for the individual groups and has successfully met the needs of the collaboration, the resource requirements at the end of Run 1 are much larger than planned. In the near future, ATLA...

  14. Hardware-based tracking at trigger level for ATLAS: The Fast Tracker (FTK) Project

    CERN Document Server

    Gramling, Johanna; The ATLAS collaboration

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer (FTK) is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level-1 trigger (at a maximum rate of 100 kHz) the FTK receives data from the 80 million channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory (AM) based on a custom chip designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. Narrow roads permit a fast track fitting but need many patterns stored in the AM to ensure ...
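
    A toy illustration of the two-stage approach described above (coarse pattern matching into roads, then a full-resolution fit on the hits inside each matched road). The bin width, geometry and pattern-bank format are invented for the example and do not correspond to the real FTK hardware.

```python
import numpy as np

# Toy two-stage tracking: coarse "road" lookup followed by a precise fit.
# The pattern bank maps a tuple of coarse bins (one per silicon layer) to a
# road id, mimicking the associative-memory lookup; all values are invented.

COARSE_BIN = 10.0  # mm, width of a coarse super-strip

def coarse_pattern(hits):
    """Map one hit coordinate per layer to its coarse super-strip address."""
    return tuple(int(h // COARSE_BIN) for h in hits)

def find_roads(candidates, pattern_bank):
    """Return (road_id, hits) for every hit combination matching a stored pattern."""
    return [(pattern_bank[coarse_pattern(hits)], hits)
            for hits in candidates if coarse_pattern(hits) in pattern_bank]

def fit_track(layer_radii, hits):
    """Full-resolution fit of the hits inside a matched road (straight-line toy)."""
    slope, intercept = np.polyfit(layer_radii, hits, deg=1)
    return slope, intercept

radii = np.array([30.0, 50.0, 80.0, 120.0])              # 4-layer toy geometry
bank = {(1, 2, 3, 5): 0}                                  # a single stored road
candidates = [np.array([12.0, 21.5, 34.0, 51.0])]         # one hit per layer, in mm
for road_id, hits in find_roads(candidates, bank):
    print("road", road_id, "-> slope, intercept =", fit_track(radii, hits))
```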

  15. The Phase-1 Upgrade of the ATLAS Level-1 Endcap Muon Trigger

    CERN Document Server

    Akatsuka, Shunichi; The ATLAS collaboration

    2018-01-01

    Proceedings for RealTime 2018, 9th-15th June 2018, Williamsburg, Virginia, USA, on the Phase-1 upgrade of the Level-1 endcap muon trigger. The deadline for submitting this document to the conference is June 24th, 2018.

  16. ATLAS Run 1 Pythia8 tunes

    CERN Document Server

    The ATLAS collaboration

    2014-01-01

    We present tunes of the Pythia8 Monte Carlo event generator's parton shower and multiple parton interaction parameters to a range of data observables from ATLAS Run 1. Four new tunes have been constructed, corresponding to the four leading-order parton density functions, CTEQ6L1, MSTW2008LO, NNPDF23LO, and HERAPDF15LO, each simultaneously tuning ten generator parameters. A set of systematic variations is provided for the NNPDF tune, based on the eigentune method. These tunes improve the modeling of observables that can be described by leading-order + parton shower simulation, and are primarily intended for use in situations where next-to-leading-order and/or multileg parton-showered simulations are unavailable or impractical.

  17. The Hardware Topological Trigger of ATLAS: Commissioning and Operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226165; The ATLAS collaboration

    2018-01-01

    The Level-1 trigger is the first rate-reducing step in the ATLAS trigger system, with an output rate of 100 kHz and a decision latency smaller than 2.5 μs. It consists of a calorimeter trigger, a muon trigger and a central trigger processor. To improve the physics reach of ATLAS, the Level-1 trigger system was upgraded at the hardware, firmware and software levels during the LHC shutdown after Run 1. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Topological Processor System (L1Topo). It consists of a single AdvancedTCA shelf equipped with two Level-1 topological processor blades. On each blade, real-time information from the calorimeter and muon Level-1 trigger systems is processed by four individual state-of-the-art FPGAs. The system has to deal with a large input bandwidth of up to 6 Tb/s, optical connectivity and low processing latency on the real-time data path. The L1Topo firmware applies measurements of angles between jets and/or leptons and several...
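
    The kind of angular selection computed in the L1Topo FPGAs can be sketched in a few lines (a software illustration with invented thresholds, not the firmware): the separation in azimuth and in eta-phi space is evaluated for pairs of Level-1 objects and compared against configurable cuts.

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return dphi if dphi <= math.pi else 2.0 * math.pi - dphi

def delta_r(obj1, obj2):
    """Angular separation between two trigger objects given as (eta, phi)."""
    return math.hypot(obj1[0] - obj2[0], delta_phi(obj1[1], obj2[1]))

def topo_accept(jets, muons, min_dphi=2.0, max_dr=10.0):
    """Accept the event if any jet-muon pair fulfils the angular requirements."""
    return any(delta_phi(j[1], m[1]) > min_dphi and delta_r(j, m) < max_dr
               for j in jets for m in muons)

# Objects as (eta, phi) pairs; thresholds and values are invented examples
print(topo_accept(jets=[(0.5, 0.1), (-1.2, 2.2)], muons=[(-0.2, 3.0)]))  # True
```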

  18. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Maeda, Junpei; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high-level trigger that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the data-taking period of Run-2, the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. In these proceedings, we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level high-level trigger system into a single even...

  19. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast TracKer processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment has planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full-scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  20. The prototype design of gFEX — A component of the L1Calo Trigger for the ATLAS Phase-I upgrade

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00304146; The ATLAS collaboration; Chen, Kai; Lanni, Francesco; Takai, Helio; Tang, Shaochun; Wu, Weihao

    2016-01-01

    The ATLAS experiment will follow the upgrade steps of the Large Hadron Collider (LHC), which will undergo a series of upgrades to increase the luminosity in the next ten years. During the Phase-I upgrade, a new component will be designed for the ATLAS Level-1 calorimeter trigger system to maintain the trigger acceptance against the increasing luminosity: the global feature extractor (gFEX). The gFEX is intended to identify patterns of energy associated with the hadronic decays of high-momentum Higgs, W and Z bosons, top quarks and exotic particles in real time at the LHC crossing rate. A prototype v1 with one System-on-Chip Xilinx ZYNQ FPGA and one Virtex-7 FPGA for technology validation was designed and tested in 2015. With the lessons learned from the prototype v1, a prototype v2 with three UltraScale FPGAs and one ZYNQ FPGA is implemented on an ATCA module. This board will receive coarse-granularity information from the entire ATLAS calorimeter on 276 optical fibers at speeds up to 12.8 Gb/s sy...

  1. Studying radiative B decays with the Atlas detector

    International Nuclear Information System (INIS)

    Viret, S.

    2004-09-01

    This thesis is dedicated to the study of radiative B decays with the ATLAS detector at the LHC (Large Hadron Collider). Radiative decays belong to the rare-decays family. Rare-decay transitions involve flavor-changing neutral currents (for example b → sγ), which are forbidden at lowest order in the Standard Model. Therefore these processes occur only at the next order, thus involving penguin or box diagrams, which are very sensitive to 'new physics' contributions. The main goal of our study is to show that it would be possible to develop an online selection strategy for radiative B decays with the ATLAS detector. To this end, we have studied the treatment of low-energy photons by the ATLAS electromagnetic calorimeter (ECal). Our analysis shows that the ATLAS ECal will be efficient with these particles. This property is extensively used in the next section, where a selection strategy for radiative B decays is proposed. Indeed, we look for a low-energy region of interest in the ECal as early as level 1 of the trigger. Then, photon identification cuts are performed in this region at level 2. However, a large part of the proposed selection scheme is also based on the inner detector, particularly at level 2. The final results show that large amounts of signal events could be collected in only one year by ATLAS. A preliminary significance (S/√B) estimation is also presented. Encouraging results concerning the observability of exclusive radiative B decays are obtained. (author)

  2. The Run-2 ATLAS Trigger System

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger systems, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. Using a few examples, we will show the ...

  3. Muon Identification with the ATLAS Tile Calorimeter Read-Out Driver for Level-2 Trigger Purposes

    CERN Document Server

    Ruiz-Martinez, A

    2008-01-01

    The Hadronic Tile Calorimeter (TileCal) of the ATLAS experiment is a detector made of iron as passive medium and plastic scintillating tiles as active medium. The light produced by the particles is converted to electrical signals, which are digitized in the front-end electronics and sent to the back-end system. The main elements of the back-end electronics are the VME 9U Read-Out Driver (ROD) boards, responsible for data management, processing and transmission. A total of 32 ROD boards, placed in the data acquisition chain between the Level-1 and Level-2 triggers, are needed to read out the whole calorimeter. They are equipped with fixed-point Digital Signal Processors (DSPs) that apply online algorithms to the incoming raw data. Although the main purpose of TileCal is to measure the energy and direction of hadronic jets, soft muons not triggered at Level-1 (with pT < 5 GeV) can be recovered by taking advantage of its projective segmentation. A TileCal standalone muon identification algorithm is presented and i...
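
    As a rough sketch of such a standalone selection (invented energy windows, not the algorithm running in the ROD DSPs), a tower can be tagged as a soft-muon candidate when each radial layer records a small, MIP-like energy deposit:

```python
# Toy MIP-compatibility selection across the three radial TileCal layers of a
# projective tower; the energy windows below are invented for illustration.

LAYER_WINDOWS_GEV = {          # (low, high) energy window per radial layer
    "A":  (0.2, 2.0),
    "BC": (0.4, 3.5),
    "D":  (0.2, 2.5),
}

def is_muon_candidate(tower_energies_gev):
    """Tag a tower as a soft-muon candidate if every layer shows a
    muon-like (MIP) energy deposit within its window."""
    return all(lo <= tower_energies_gev.get(layer, 0.0) <= hi
               for layer, (lo, hi) in LAYER_WINDOWS_GEV.items())

print(is_muon_candidate({"A": 0.6, "BC": 1.1, "D": 0.8}))   # True: MIP-like
print(is_muon_candidate({"A": 4.0, "BC": 9.0, "D": 0.1}))   # False: jet-like
```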

  4. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) utilizes a trigger system that consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. In LHC Run-2, starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV, providing a luminosity of up to 1.2 × 10^34 cm^-2 s^-1. The ATLAS trigger system has to cope with these challenges while maintaining or even improving the efficiency to select relevant physics processes. In this paper, the ATLAS trigger system for LHC Run-2 is reviewed. Then, the performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities such as missing transverse energy are shown. Electron, muon and photon triggers covering trans...

  5. The ATLAS beam pick-up based timing system

    International Nuclear Information System (INIS)

    Ohm, C.; Pauly, T.

    2010-01-01

    The ATLAS BPTX stations are composed of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The BPTX signals are discriminated with a constant-fraction discriminator to provide a Level-1 trigger when a bunch passes through ATLAS. Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX monitoring system measures the phase between collisions and clock with a precision better than 100 ps in order to guarantee a stable phase relationship for optimal signal sampling in the sub-detector front-end electronics. In addition to monitoring this phase, the properties of the individual bunches are measured and the structure of the beams is determined. On September 10, 2008, the first LHC beams reached the ATLAS experiment. During this period with beam, the ATLAS BPTX system was used extensively to time in the read-out of the sub-detectors. In this paper, we present the performance of the BPTX system and its measurements of the first LHC beams.

  6. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2008-01-01

    In the ATLAS computing model, the tiered hierarchy ranges from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the Tier-0 and Tier-1 definitions and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their physics goals using institutional and otherwise leveraged resources, and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Instituto de Fisica Corpuscular de Valencia), after discussing with the ATLAS Tier-3 task force, should interact with the ATLAS computing model, detail the conditions under which Tier-3 centres can expect some level of support and set reasonable expectations for the scope and support of ATLAS Tier-3 sites. (orig.)

  7. Calorimetry triggering in ATLAS

    CERN Document Server

    Igonkina, O; Adragna, P; Aharrouche, M; Alexandre, G; Andrei, V; Anduaga, X; Aracena, I; Backlund, S; Baines, J; Barnett, B M; Bauss, B; Bee, C; Behera, P; Bell, P; Bendel, M; Benslama, K; Berry, T; Bogaerts, A; Bohm, C; Bold, T; Booth, J R A; Bosman, M; Boyd, J; Bracinik, J; Brawn, I, P; Brelier, B; Brooks, W; Brunet, S; Bucci, F; Casadei, D; Casado, P; Cerri, A; Charlton, D G; Childers, J T; Collins, N J; Conde Muino, P; Coura Torres, R; Cranmer, K; Curtis, C J; Czyczula, Z; Dam, M; Damazio, D; Davis, A O; De Santo, A; Degenhardt, J; Delsart, P A; Demers, S; Demirkoz, B; Di Mattia, A; Diaz, M; Djilkibaev, R; Dobson, E; Dova, M, T; Dufour, M A; Eckweiler, S; Ehrenfeld, W; Eifert, T; Eisenhandler, E; Ellis, N; Emeliyanov, D; Enoque Ferreira de Lima, D; Faulkner, P J W; Ferland, J; Flacher, H; Fleckner, J E; Flowerdew, M; Fonseca-Martin, T; Fratina, S; Fhlisch, F; Gadomski, S; Gallacher, M P; Garitaonandia Elejabarrieta, H; Gee, C N P; George, S; Gillman, A R; Goncalo, R; Grabowska-Bold, I; Groll, M; Gringer, C; Hadley, D R; Haller, J; Hamilton, A; Hanke, P; Hauser, R; Hellman, S; Hidvgi, A; Hillier, S J; Hryn'ova, T; Idarraga, J; Johansen, M; Johns, K; Kalinowski, A; Khoriauli, G; Kirk, J; Klous, S; Kluge, E-E; Koeneke, K; Konoplich, R; Konstantinidis, N; Kwee, R; Landon, M; LeCompte, T; Ledroit, F; Lei, X; Lendermann, V; Lilley, J N; Losada, M; Maettig, S; Mahboubi, K; Mahout, G; Maltrana, D; Marino, C; Masik, J; Meier, K; Middleton, R P; Mincer, A; Moa, T; Monticelli, F; Moreno, D; Morris, J D; Mller, F; Navarro, G A; Negri, A; Nemethy, P; Neusiedl, A; Oltmann, B; Olvito, D; Osuna, C; Padilla, C; Panes, B; Parodi, F; Perera, V J O; Perez, E; Perez Reale, V; Petersen, B; Pinzon, G; Potter, C; Prieur, D P F; Prokishin, F; Qian, W; Quinonez, F; Rajagopalan, S; Reinsch, A; Rieke, S; Riu, I; Robertson, S; Rodriguez, D; Rogriquez, Y; Rhr, F; Saavedra, A; Sankey, D P C; Santamarina, C; Santamarina Rios, C; Scannicchio, D; Schiavi, C; Schmitt, K; Schultz-Coulon, H C; Schfer, U; Segura, E; Silverstein, D; Silverstein, S; Sivoklokov, S; Sjlin, J; Staley, R J; Stamen, R; Stelzer, J; Stockton, M C; Straessner, A; Strom, D; Sushkov, S; Sutton, M; Tamsett, M; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Torrence, E; Tripiana, M; Urquijo, P; Urrejola, P; Vachon, B; Vercesi, V; Vorwerk, V; Wang, M; Watkins, P M; Watson, A; Weber, P; Weidberg, T; Werner, P; Wessels, M; Wheeler-Ellis, S; Whiteson, D; Wiedenmann, W; Wielers, M; Wildt, M; Winklmeier, F; Wu, X; Xella, S; Zhao, L; Zobernig, H; de Seixas, J M; dos Anjos, A; Asman, B; Özcan, E

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  8. MSSM fits to the ATLAS 1 lepton excess

    Energy Technology Data Exchange (ETDEWEB)

    Kowalska, Kamila [TU Dortmund, Fakultaet fuer Physik, Dortmund (Germany); Sessolo, Enrico Maria [National Centre for Nuclear Research, Warsaw (Poland)

    2017-02-15

    We use the framework of the p19MSSM to perform a fit to the mild excesses over the Standard Model background recently observed in three bins of the ATLAS 1-lepton + (b-)jets + E_T^miss search. We find a few types of spectra that can fit the emerging signal and at the same time are not excluded by other LHC searches. They can be grouped roughly into two categories. The first class is characterized by the presence of one stop, or stop and sbottoms, with mass in the ballpark of 700-800 GeV and a neutralino LSP of mass around 400 GeV, with or without the additional presence of an intermediate chargino. In the second type of scenario the stop, lightest chargino, sbottom if present, and the neutralino are at or above approximately 650 GeV, and the signal originates from cascade decays of squarks of the 1st and 2nd generation, which should have a mass of 1.1-1.2 TeV. For the best-fit scenarios, we compare the global chi-squared with several ATLAS and CMS searches with the corresponding chi-squared of the Standard Model expectation, showing that the putative signal is also favored globally with respect to the background-only hypothesis. We point out that if the observed excess persists in the next round of data, it should be accompanied by associated significant excesses in all-hadronic final-state searches. (orig.)

  9. Herschel-ATLAS : Planck sources in the phase 1 fields

    NARCIS (Netherlands)

    Herranz, D.; González-Nuevo, J.; Clements, D.; De, Zotti G.; Lopez-Caniego, M.; Lapi, A.; Rodighiero, G.; Danese, L.; Fu, H.; Cooray, A.; Baes, M.; Bendo, G.; Bonavera, L.; Carrera, F.; Dole, H.; Eales, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Massardi, M.; Michalowski, M.; Negrello, M.; Rigby, E.E.; Scott, D.; Valiante, E.; Valtchanov, I.; Werf, van der P.P.; Auld, R.; Buttiglione, S.; Dariush, A.; Dunne, L.; Hopwood, R.; Hoyos, C.; Ibar, E.; Maddox, S.

    2013-01-01

    We present the results of a cross-correlation of the Planck Early Release Compact Source Catalogue (ERCSC) with the catalogue of Herschel-ATLAS sources detected in the phase 1 fields, covering 134.55 deg^2. There are 28 ERCSC sources detected by Planck at 857 GHz in this area. As many as 16 of

  10. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    Grael, F F; Maidantchik, C; Évora, L H R A; Karam, K; Moraes, L O F; Cirilli, M; Nessi, M; Pommès, K

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the long lifetime of the experiment and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
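
    A minimal sketch of the kind of thin abstraction layer described above, which hides the concrete database technology behind a common interface; the class and method names are invented for illustration and are not the Glance API.

```python
from abc import ABC, abstractmethod
import sqlite3

class Backend(ABC):
    """Common interface hiding the technology and modeling of the database."""
    @abstractmethod
    def search(self, table, **criteria): ...
    @abstractmethod
    def insert(self, table, record): ...

class SQLiteBackend(Backend):
    """One concrete technology; another backend would plug in the same way."""
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def search(self, table, **criteria):
        where = " AND ".join(f"{key} = ?" for key in criteria) or "1=1"
        cur = self.conn.execute(f"SELECT * FROM {table} WHERE {where}",
                                tuple(criteria.values()))
        return cur.fetchall()

    def insert(self, table, record):
        columns = ", ".join(record)
        marks = ", ".join("?" for _ in record)
        self.conn.execute(f"INSERT INTO {table} ({columns}) VALUES ({marks})",
                          tuple(record.values()))
        self.conn.commit()

# Client code depends only on the generic interface, not on the backend in use
def members_of(db: Backend, institute):
    return db.search("members", institute=institute)
```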

  11. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, authors' lists, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the long lifetime of the experiment and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  12. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  13. ATLAS TDAQ System Administration: evolution and re-design

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Lee, Christopher Jon; Scannicchio, Diana; Twomey, Matthew Shaun

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) a tremendous amount of work has been done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration to Puppet of the configuration management systems has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and...

  14. ATLAS strip detector: Operational Experience and Run1 → Run2 transition

    CERN Document Server

    NAGAI, K; The ATLAS collaboration

    2014-01-01

    The ATLAS SCT operational experience and the detector performance during the Run 1 period of the LHC will be reported. Additionally, the preparations towards Run 2 carried out during Long Shutdown 1 will be described.

  15. The ATLAS Trigger in Run-2 - Design, Menu and Performance

    CERN Document Server

    Vazquez Schroeder, Tamara; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multi-variate methods, to carry out the necessary physics filtering. In total, the ATLAS online selection consists of thousands of different individual triggers. Taken together, these constitute the trigger menu, which reflects the physics goals of the collaboration while taking into account the available data-taking resources. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and number of interactions per bunch crossing (pileup) which are the result of the...

  16. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager, ProdSys2, generates the actual workflow tasks, and their jobs are executed across more than a hundred distributed computing sites by PanDA, the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definitions tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation. (paper)
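
    The layered structure described in this record can be caricatured in a few lines of Python (a toy sketch with invented names, not the ProdSys2/JEDI code): a templated workflow definition is expanded into ordered tasks, and each task is then split into jobs whose size can be tailored to the executing site.

```python
# Toy sketch of the two levels: a templated workflow definition is expanded
# into tasks (one per step and dataset), and each task into jobs sized to the
# available input. All names and numbers are invented for illustration.

MC_TEMPLATE = ["generate", "simulate", "digitize+reconstruct", "make_ntuples"]

def define_tasks(template, datasets):
    """Outer level: one task per (dataset, step), kept in execution order."""
    return [{"dataset": ds, "step": step, "order": i}
            for ds in datasets
            for i, step in enumerate(template)]

def define_jobs(task, input_files, files_per_job=10):
    """Inner level: split a task's input into jobs of a configurable size."""
    return [{"task": (task["dataset"], task["step"]),
             "files": input_files[i:i + files_per_job]}
            for i in range(0, len(input_files), files_per_job)]

tasks = define_tasks(MC_TEMPLATE, ["sample_A", "sample_B"])
jobs = define_jobs(tasks[0], [f"file_{n}.root" for n in range(25)])
print(len(tasks), "tasks;", len(jobs), "jobs for the first task")  # 8 tasks; 3 jobs
```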

  17. ATLAS + CMS top production and properties: Run 1 legacy

    CERN Document Server

    AUTHOR|(SzGeCERN)641905; The ATLAS collaboration

    2015-01-01

    The large Run 1 data sample of top-quark events collected at the Large Hadron Collider allows a variety of measurements to analyse the production and properties of the top quark. Measurements of top-quark production cross sections and top-quark properties in proton-proton collisions with the ATLAS and CMS detectors at the LHC are presented.

  18. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level at the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfil its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high-performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  19. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2016-01-01

    Changes in the trigger menu, the online algorithmic event selection of the ATLAS experiment at the LHC, made in response to luminosity and detector changes, are followed by adjustments to the monitoring system. This is done to ensure that the collected data are useful and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates with the installation of new software releases at Tier-0. This created unnecessary overhead for developers and operators, and unavoidably led to different releases for the data-taking and the monitoring setup. We present a "trigger menu-aware" monitoring system designed for the ATLAS Run 2 data-taking. The new monitoring system aims to simplify the ATLAS operational workflows, and allows for easy and flexible monitoring configuration changes at the Tier-0 site via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the ne...

  20. NODC Standard Product: International ocean atlas Volume 11 - Climatic atlas of the Sea of Azov 2008 (1 disc set) (NODC Accession 0098574)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This Atlas, Climatic Atlas of the Sea of Azov 2008 on CD-ROM, is an update to Volume 10, Climatic Atlas of the Sea of Azov 2006 on CD-ROM (NODC Accession 0098572),...

  1. Hardware-based Tracking at Trigger Level for ATLAS: The Fast TracKer (FTK) Project

    CERN Document Server

    Gramling, Johanna; The ATLAS collaboration

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer (FTK) is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level 1 trigger (at a maximum rate of 100 kHz) the FTK receives data from the 80 million channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory (AM) that is based on the use of a custom chip, designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. Narrow roads permit a fast track fitting but need many patterns stored in the AM to ensure...

  2. Hardware-based Tracking at Trigger Level for ATLAS the Fast TracKer (FTK) Project

    CERN Document Server

    INSPIRE-00245767

    2015-01-01

    Physics collisions at 13 TeV are expected at the LHC with an average of 40-50 proton-proton collisions per bunch crossing under nominal conditions. Tracking at trigger level is an essential tool to control the rate in high-pileup conditions while maintaining a good efficiency for relevant physics processes. The Fast TracKer is an integral part of the trigger upgrade for the ATLAS detector. For every event passing the Level-1 trigger (at a maximum rate of 100 kHz) the FTK receives data from all the channels of the silicon detectors, providing tracking information to the High Level Trigger in order to ensure a selection robust against pile-up. The FTK performs a hardware-based track reconstruction, using associative memory that is based on the use of a custom chip, designed to perform pattern matching at very high speed. It finds track candidates at low resolution (roads) that seed a full-resolution track fitting done by FPGAs. An overview of the FTK system with focus on the pattern matching procedure will be p...

  3. The ATLAS Trigger System : Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware-based Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the course of the ongoing Run-2 data-taking campaign at 13 TeV centre-of-mass energy the trigger rates will be approximately 5 times higher than in Run-1. In these proceedings we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger subsystem and the merging of the previously two-level HLT system into a single ev...
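
    The rates quoted above imply the following back-of-the-envelope rejection factors; the sketch simply divides the numbers, taking "a few hundred Hz" as roughly 400 Hz and using the 100 kHz Level-1 accept rate quoted elsewhere in these records (both assumptions).

```python
# Back-of-the-envelope rejection factors implied by the rates quoted above.

bunch_crossing_rate = 40e6      # Hz, LHC design bunch-crossing rate
level1_accept_rate = 100e3      # Hz, L1 accept rate quoted elsewhere in these records
recording_rate = 400.0          # Hz, "a few hundred Hz" taken as ~400 Hz (assumption)

l1_rejection = bunch_crossing_rate / level1_accept_rate
hlt_rejection = level1_accept_rate / recording_rate
total_rejection = bunch_crossing_rate / recording_rate

print(f"L1 rejection  : {l1_rejection:.0f}x")     # 400x
print(f"HLT rejection : {hlt_rejection:.0f}x")    # 250x
print(f"Total         : {total_rejection:.0f}x")  # 100000x
```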

  4. Calorimetry triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O; Achenbach, R; Andrei, V; Adragna, P; Aharrouche, M; Bauss, B; Bendel, M; Alexandre, G; Anduaga, X; Aracena, I; Backlund, S; Bogaerts, A; Baines, J; Barnett, B M; Bee, C; P, Behera; Bell, P; Benslama, K; Berry, T; Bohm, C

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  5. Calorimetry Triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O.; Achenbach, R.; Adragna, P.; Aharrouche, M.; Alexandre, G.; Andrei, V.; Anduaga, X.; Aracena, I.; Backlund, S.; Baines, J.; Barnett, B.M.; Bauss, B.; Bee, C.; Behera, P.; Bell, P.; Bendel, M.; Benslama, K.; Berry, T.; Bogaerts, A.; Bohm, C.; Bold, T.; Booth, J.R.A.; Bosman, M.; Boyd, J.; Bracinik, J.; Brawn, I.P.; Brelier, B.; Brooks, W.; Brunet, S.; Bucci, F.; Casadei, D.; Casado, P.; Cerri, A.; Charlton, D.G.; Childers, J.T.; Collins, N.J.; Conde Muino, P.; Coura Torres, R.; Cranmer, K.; Curtis, C.J.; Czyczula, Z.; Dam, M.; Damazio, D.; Davis, A.O.; De Santo, A.; Degenhardt, J.

    2011-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  6. Calorimetry triggering in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Igonkina, O [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Achenbach, R; Andrei, V [Kirchhoff Institut fuer Physik, Universitaet Heidelberg, Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London (United Kingdom); Aharrouche, M; Bauss, B; Bendel, M [Institut für Physik, Universität Mainz, Mainz (Germany); Alexandre, G [Section de Physique, Universite de Geneve, Geneva (Switzerland); Anduaga, X [Universidad Nacional de La Plata, La Plata (Argentina); Aracena, I [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Backlund, S; Bogaerts, A [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Baines, J; Barnett, B M [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxon (United Kingdom); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Behera, P [Iowa State University, Ames, Iowa (United States); Bell, P [School of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Benslama, K [University of Regina, Regina (Canada); Berry, T [Department of Physics, Royal Holloway and Bedford New College, Egham (United Kingdom); Bohm, C [Fysikum, Stockholm University, Stockholm (Sweden)

    2009-04-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  7. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape and spotting areas which required improvements. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, to tuning of FTS channels between CERN and the Tier-1s, and to studies of data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs with emphasis on data management and job execution in the ATLAS production system.

  8. FTK: the hardware Fast TracKer of the ATLAS experiment at CERN

    CERN Document Server

    Maznas, Ioannis; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up of the Large Hadron Collider environment, the trigger systems of the experiments have to be exceedingly sophisticated and fast at the same time, in order to select the relevant physics processes against the background processes. The Fast TracKer (FTK) is a track-finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). To accomplish this, FTK is a highly parallel system which is currently under installation in ATLAS. It will first provide the trigger system with tracks in the central region of the ATLAS detector, and next year it is expected to cover the whole detector. The system is based on pattern matching between hits coming from the silicon trackers of the ATLAS detector and 1 billion simulated patterns stored in specially designed ASIC chips (Associative memory – AM06). In a firs...

  9. Improving the ATLAS physics potential with the Fast Track Trigger System

    CERN Document Server

    Cavaliere, Viviana; The ATLAS collaboration

    2015-01-01

    The ATLAS Fast TracKer (FTK) is a custom electronics system that will operate at the full Level-1 accept rate, 100 kHz, to provide high quality tracks as input to the High-Level Trigger. The event reconstruction is performed in hardware, thanks to the massive parallelism of associative memories (AM) and FPGAs. We present the advantages for the physics goals of the ATLAS experiment and the recent results on the design, technological advancements and testing of some of the core components used in the processor.

  10. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1997-03-01

    This report covers the following topics: (1) status of the ATLAS accelerator; (2) progress in R and D towards a proposal for a National ISOL Facility; (3) highlights of recent research at ATLAS; (4) the move of gammasphere from LBNL to ANL; (5) Accelerator Target Development laboratory; (6) Program Advisory Committee; (7) ATLAS User Group Executive Committee; and (8) ATLAS user handbook available in the World Wide Web. A brief summary is given for each topic

  11. Improving vertebra segmentation through joint vertebra-rib atlases

    Science.gov (United States)

    Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 ± 3.1% to 93.8 ± 2.1% for the left and right transverse processes and a decrease in the mean and max surface distance from 0.75 ± 0.60 mm and 8.63 ± 4.44 mm to 0.30 ± 0.27 mm and 3.65 ± 2.87 mm, respectively.
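
    For reference, the Dice coefficient quoted above can be computed for two binary masks as in the short sketch below; this is a generic implementation, not the code used in the study, and the example masks are made up.

```python
# Minimal sketch of the Dice coefficient for binary segmentation masks
# (generic implementation, not the paper's code).

import numpy as np


def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


if __name__ == "__main__":
    seg = np.zeros((50, 50), dtype=bool); seg[10:40, 10:40] = True   # automatic segmentation (toy)
    ref = np.zeros((50, 50), dtype=bool); ref[12:42, 12:40] = True   # reference mask (toy)
    print(f"Dice: {100 * dice_coefficient(seg, ref):.1f}%")
```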

  12. The new ATLAS Fast Calorimeter Simulation

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00223142; The ATLAS collaboration

    2016-01-01

    Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. The new ATLAS Fast Calorimeter Simulation (FastCaloSim) is an improved parametrisation compared to the one used in the LHC Run-1. It provides a simulation of the particle energy response at the calorimeter read-out cell level, taking into account the detailed particle shower shapes and the correlations between the energy depositions in the various calorimeter layers. It is interfaced to the standard ATLAS digitization and reconstruction software, and can be tuned to data more easily than with GEANT4. The new FastCaloSim incorporates developments in geometry and physics lists of the last five years and benefit...
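
    The following toy sketch illustrates the general idea of a parametrised calorimeter response, i.e. drawing a total energy response and correlated per-layer energy fractions instead of simulating every shower particle. It is not FastCaloSim itself, and every number in it (layer fractions, correlations, resolution) is invented for illustration.

```python
# Illustrative sketch (not FastCaloSim) of a parametrized calorimeter response:
# the total deposited energy and its sharing between layers are drawn from a
# parametrization with correlations between layers, instead of simulating the
# full shower with GEANT4. All numbers below are made up.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parametrization: mean energy fraction per layer
# (presampler, front, middle, back) and their correlation matrix.
mean_fractions = np.array([0.02, 0.25, 0.65, 0.08])
corr = np.array([[1.0, 0.3, -0.2, -0.1],
                 [0.3, 1.0, -0.6, -0.2],
                 [-0.2, -0.6, 1.0, 0.1],
                 [-0.1, -0.2, 0.1, 1.0]])
sigmas = 0.15 * mean_fractions
cov = corr * np.outer(sigmas, sigmas)


def simulate_shower(true_energy_gev: float) -> np.ndarray:
    """Return per-layer deposited energies for one parametrized shower."""
    response = rng.normal(loc=0.97, scale=0.02)             # sampled total energy response
    fractions = rng.multivariate_normal(mean_fractions, cov)
    fractions = np.clip(fractions, 0.0, None)
    fractions /= fractions.sum()
    return true_energy_gev * response * fractions


print("layer energies [GeV]:", np.round(simulate_shower(50.0), 2))
```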

  13. Upgrading the ATLAS Fast Calorimeter Simulation

    CERN Document Server

    Hubacek, Zdenek; The ATLAS collaboration

    2016-01-01

    Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. In ATLAS, a fast simulation of the calorimeter systems was developed, called Fast Calorimeter Simulation (FastCaloSim). It provides a parametrized simulation of the particle energy response at the calorimeter read-out cell level. It is interfaced to the standard ATLAS digitization and reconstruction software, and can be tuned to data more easily than with GEANT4. The original version of FastCaloSim has been very important in the LHC Run-1, with several billion events simulated. An improved parametrisation is being developed, to eventually address shortcomings of the original version. It incorporates developme...

  14. The High-Resolution IRAS Galaxy Atlas

    Science.gov (United States)

    Cao, Yu; Terebey, Susan; Prince, Thomas A.; Beichman, Charles A.; Oliversen, R. (Technical Monitor)

    1997-01-01

    An atlas of the Galactic plane (-4.7° < b < 4.7°), along with the molecular clouds in Orion, rho Oph, and Taurus-Auriga, has been produced at 60 and 100 microns from IRAS data. The atlas consists of resolution-enhanced co-added images with 1-2 arcmin resolution and co-added images at the native IRAS resolution. The IRAS Galaxy Atlas, together with the Dominion Radio Astrophysical Observatory H I line/21 cm continuum and FCRAO CO (1-0) Galactic plane surveys, which both have similar (approx. 1 arcmin) resolution to the IRAS atlas, provides a powerful tool for studying the interstellar medium, star formation, and large-scale structure in our Galaxy. This paper documents the production and characteristics of the atlas.

  15. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  16. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P S; Nicolas, L

    2011-01-01

    The high radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2 and the fluences of 1-MeV(Si) equivalent neutrons and thermal neutrons at several locations in the ATLAS detector. In this paper, measurements collected during two years of ATLAS data taking are presented and compared to predictions from radiation background simulations.

  17. ATLAS Award for Shield Supplier

    CERN Multimedia

    2004-01-01

    ATLAS technical coordinator Dr. Marzio Nessi presents the ATLAS supplier award to Vojtech Novotny, Director General of Skoda Hute. On 3 November, the ATLAS experiment honoured one of its suppliers, Skoda Hute s.r.o., of Plzen, Czech Republic, for their work on the detector's forward shielding elements. These huge and very massive cylinders surround the beampipe at either end of the detector to block stray particles from interfering with ATLAS's muon chambers. For the shields, Skoda Hute produced 10 cast iron pieces with a total weight of 780 tonnes at a cost of 1.4 million CHF. Although there are many iron foundries in the CERN member states, there are only a limited number that can produce castings of the necessary size: the large pieces range in weight from 59 to 89 tonnes and are up to 1.5 metres thick. The forward shielding was designed by the ATLAS Technical Coordination in close collaboration with the ATLAS groups from the Czech Technical University and Charles University in Prague. The Czech groups a...

  18. ATLAS LAr Calorimeter Trigger Electronics Phase-1 Upgrade

    CERN Document Server

    Aad, Georges; The ATLAS collaboration

    2017-01-01

    The upgrade of the Large Hadron Collider (LHC) scheduled for the shut-down period of 2019-2020, referred to as the Phase-I upgrade, will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a sufficient increase of the trigger rate, an improvement of the trigger system is required. The Liquid Argon (LAr) Calorimeter read-out will therefore be modified to use digital trigger signals with a higher spatial granularity in order to improve the identification efficiencies for electrons, photons, taus, jets and missing energy at high background rejection rates at the Level-1 trigger. The new trigger signals will be arranged in 34000 so-called Super Cells, which achieve 5-10 times finer granularity than the trigger towers currently used and allow an improved background rejection. The readout of the trigger signals will process the signal of the Super Cells at every LHC bunch-crossing at 12-bit precision and a frequency of 40 MHz. The data will...
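
    A rough data-volume estimate follows directly from the numbers quoted above (about 34000 Super Cells, 12-bit samples, 40 MHz bunch-crossing rate); the sketch below ignores framing and encoding overheads, which are not specified here.

```python
# Rough trigger data-rate arithmetic implied by the numbers quoted above:
# ~34 000 Super Cells digitized at 12-bit precision every LHC bunch crossing
# (40 MHz). Framing and line-encoding overheads are ignored (assumption).

n_super_cells = 34_000
bits_per_sample = 12
bunch_crossing_rate_hz = 40e6

raw_rate_bps = n_super_cells * bits_per_sample * bunch_crossing_rate_hz
print(f"raw trigger data rate: {raw_rate_bps / 1e12:.1f} Tbit/s")   # about 16.3 Tbit/s
```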

  19. Development of a monitoring tool to validate trigger level analysis in the ATLAS experiment

    CERN Document Server

    Hahn, Artur

    2014-01-01

    This report summarizes my thirteen-week summer student project at CERN from June 30th until September 26th of 2014. My task was to contribute to a monitoring tool for the ATLAS experiment, comparing jets reconstructed by the trigger to fully offline-reconstructed and saved events, by creating a set of insightful histograms to be used during Run 2 of the Large Hadron Collider, planned to start in early 2015. The motivation behind this project is to validate the use of data taken solely from the high level trigger for analysis purposes. Once the code generating the plots was completed, it was tested on data collected during Run 1 up to the year 2012 and on Monte Carlo simulated events with centre-of-mass energies √s = 8 TeV and √s = 14 TeV.
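
    A minimal sketch of the kind of comparison described above is given below: each trigger-level jet is matched to the nearest offline jet in ΔR and the pT ratio is histogrammed. It is illustrative only, not the monitoring tool developed in the project; the matching cone, binning and example jets are arbitrary choices.

```python
# Minimal sketch (not the actual monitoring tool) of comparing trigger-level
# jets to offline-reconstructed jets: each trigger jet is matched to the
# nearest offline jet in delta-R and the relative pT response is histogrammed.

import math
from collections import Counter


def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance with phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)


def match_and_fill(trigger_jets, offline_jets, hist, max_dr=0.3, bin_width=0.05):
    """trigger_jets / offline_jets: lists of (pt, eta, phi). hist: Counter of pT-ratio bins."""
    for tpt, teta, tphi in trigger_jets:
        best = min(offline_jets, key=lambda j: delta_r(teta, tphi, j[1], j[2]), default=None)
        if best and delta_r(teta, tphi, best[1], best[2]) < max_dr:
            ratio = tpt / best[0]
            hist[round(ratio / bin_width) * bin_width] += 1


hist = Counter()
match_and_fill([(98.0, 0.5, 1.2), (45.0, -1.1, -2.0)],     # toy trigger jets
               [(100.0, 0.52, 1.18), (47.0, -1.12, -1.98)],  # toy offline jets
               hist)
print(sorted(hist.items()))
```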

  20. The ATLAS TDAQ DataCollection Software

    CERN Document Server

    Haeberli, C; Pretzl, K

    2003-01-01

    The Large Hadron Collider, which is currently under construction at CERN near Geneva, will collide protons with a center-of-mass energy of 14 TeV. This high energy offers the possibility to discover particles with masses on the TeV scale. Bunches of 1.15 × 10^11 protons will cross at a rate of 40 MHz. 23 proton-proton collisions will happen at every bunch-crossing, which results in a total proton-proton interaction rate of almost one GHz. Most of these interactions do not contain new physics but mostly QCD background. Therefore detectors designed for discovery physics, such as ATLAS, need to select the ~100 bunch-crossings with the biggest discovery potential out of the 40 × 10^6 bunch-crossings per second. In the case of the ATLAS experiment this reduction will be achieved with a three-level trigger system. The first level trigger runs on custom hardware, the two higher trigger levels run as software algorithms on farms of hundreds of commodity PCs. The second level trigger will run at a rate of up to 100 kHz on ...
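
    The rate arithmetic quoted above can be checked directly: 23 proton-proton collisions per bunch crossing at a 40 MHz crossing rate give an interaction rate of almost one GHz, and keeping ~100 bunch crossings per second corresponds to a rejection of roughly one in 4×10^5.

```python
# Interaction-rate arithmetic from the numbers quoted above.

crossing_rate_hz = 40e6          # LHC bunch-crossing rate
collisions_per_crossing = 23     # average pile-up quoted above
recorded_crossings_per_s = 100   # "~100 bunch-crossings" kept per second

interaction_rate = crossing_rate_hz * collisions_per_crossing
print(f"pp interaction rate: {interaction_rate / 1e9:.2f} GHz")                  # 0.92 GHz
print(f"crossings kept     : 1 in {crossing_rate_hz / recorded_crossings_per_s:.0e}")  # 1 in 4e+05
```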

  1. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  2. Upgrade of the First Level Muon Trigger in the End-Cap New Small Wheel Region of the ATLAS Detector

    International Nuclear Information System (INIS)

    Munwes, Yonathan

    2013-06-01

    The luminosity levels foreseen at the LHC after the 2018 LHC upgrade will tighten the demands on the ATLAS first level muon trigger system. A finer muon selection will be required to cope with the increased background and to keep the trigger rate for muons with pT of 20 GeV/c as before. The introduction of new detectors in the small wheel region of the end-cap muon spectrometer will allow the current trigger selection to be refined, increasing the rejection power for tracks not coming from the interaction point and making it possible to find candidate muon tracks with 1 mrad angular resolution within the 500 ns available latency. The on-detector trigger logic will require a coincidence of pads in eight layers of small-strip thin gap chambers (sTGC) to determine the trigger regions-of-interest. The charge information from the detector strips of the selected regions-of-interest will be sent to the off-detector trigger logic, which will calculate the strip centroids and extrapolate the muon tracks. The muon track information will finally be sent to the end-cap sector logic, which will combine the big wheel and the new small wheel trigger data, and provide the trigger muon candidates to the ATLAS central trigger. (author)
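
    The centroid-and-extrapolation step described above can be illustrated with the following minimal sketch: a charge-weighted centroid is computed per strip plane and a straight-line fit through the centroids yields a track angle. It is not the NSW trigger logic; the plane positions, strip charges and units are hypothetical.

```python
# Minimal sketch (not the NSW trigger firmware) of the centroid step described
# above: the charge-weighted centroid of the fired strips is computed on each
# detector plane, and a straight line through the centroids gives a track angle.

import numpy as np


def strip_centroid(strip_positions, strip_charges):
    """Charge-weighted mean strip position on one plane."""
    q = np.asarray(strip_charges, dtype=float)
    x = np.asarray(strip_positions, dtype=float)
    return float(np.dot(x, q) / q.sum())


def track_angle_mrad(plane_z, centroids):
    """Angle (mrad) of a straight-line fit through the (z, centroid) points."""
    slope, _ = np.polyfit(np.asarray(plane_z), np.asarray(centroids), 1)
    return 1e3 * np.arctan(slope)


# Hypothetical example: four strip planes at different z, crossed by one muon.
planes_z = [7400.0, 7411.0, 7422.0, 7433.0]   # mm, made-up plane positions
centroids = [strip_centroid([100.2, 103.4, 106.6], [0.2, 1.0, 0.3]) + 0.05 * i
             for i in range(4)]
print(f"fitted track angle: {track_angle_mrad(planes_z, centroids):.2f} mrad")
```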

  3. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing it to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)

  4. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools, used by subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of the environmental parameters (air temperatures, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and additional components developed by the CERN Joint Controls Project Framework. The prototype implementation and measurements are presented
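
    A minimal sketch of the kind of environmental monitoring described above is shown below: readings are compared against allowed bands and alarms are raised when they fall outside. This is illustrative only, not the commercial control package / Joint Controls Project Framework implementation referred to above; the sensor names, limits and rack name are hypothetical.

```python
# Minimal sketch of rack environmental monitoring: compare sensor readings
# against allowed bands and report alarms. Sensor names, limits and the rack
# name are hypothetical; this is not the ATLAS DCS implementation.

RACK_LIMITS = {
    "air_temperature_C": (15.0, 35.0),
    "humidity_percent": (20.0, 60.0),
}


def check_rack(rack_name: str, readings: dict) -> list:
    """Return a list of alarm strings for readings outside their limits."""
    alarms = []
    for sensor, value in readings.items():
        low, high = RACK_LIMITS[sensor]
        if not (low <= value <= high):
            alarms.append(f"{rack_name}/{sensor}={value} outside [{low}, {high}]")
    return alarms


print(check_rack("Y.25-05.S1", {"air_temperature_C": 41.2, "humidity_percent": 45.0}))
```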

  5. The ATLAS Trigger System: Ready for Run II

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger system has been used successfully for data collection in the 2009-2013 Run 1 operation cycle of the CERN Large Hadron Collider (LHC) at center-of-mass energies of up to 8 TeV. With the restart of the LHC for the new Run 2 data-taking period at 13 TeV, the trigger rates are expected to rise by approximately a factor of 5. The trigger system consists of a hardware-based first level (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of ~1 kHz. This presentation will give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown period in order to deal with the increased trigger rates while efficiently selecting the physics processes of interest. These upgrades include changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system, and the merging of the previously two-level HLT ...

  6. Historical cartography in the 19th century: Karl Spruner von Merz (1803-1892) and his Historisch-geographischer Hand-Atlas: Atlas antiquus

    Directory of Open Access Journals (Sweden)

    FLORIN-GHEORGHE FODOREAN

    2015-05-01

    The historical cartography in the 19th century: Karl Spruner von Merz (1803-1892) and his Historisch-geographischer Hand-Atlas: Atlas antiquus. The historical atlases published during the 19th century changed the level of knowledge of the people regarding the ancient geographical space. The present study focuses on the activity of Karl Spruner von Merz (1803-1892), one of the most important cartographers of the 19th century and a close collaborator of the famous Justus Perthes publishing house in Germany. In 1855, the first section of the atlas edited by Karl Spruner was published: Historisch-geographischer Hand-Atlas: Atlas antiquus, Justus Perthes Verlag, Gotha, 1855. The first section of the atlas included only four pages of texts and commentaries. The atlas was published in three editions. The third one is entitled Spruner-Menke atlas antiquus. Karoli Spruneri opus. Tertio edidit, Theodorus Menke. Gothae: Sumtibus Justi Perthes, anno MDCCCLXV. Thirty-one maps were published. Many of the maps published in this atlas represent a mix of data gathered from ancient geographical sources. By comparing the information from these maps, one can establish the level of modern knowledge regarding the geographical space of the ancient regions of the world.

  7. Search for the Standard-Model Higgs boson in associated WH production with 1.47 fb⁻¹ of data from the ATLAS experiment at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Verlage, Tobias

    2011-09-28

    The Large Hadron Collider is a particle accelerator at CERN, in which since March 30th 2010 protons are brought to collision at a centre-of-mass energy of √s = 7 TeV. These events can be observed by means of the ATLAS detector, one of two general-purpose detectors at the Large Hadron Collider. One of the main purposes of the ATLAS detector is the search for the Standard-Model Higgs boson. This thesis describes a study on the search for the Standard-Model Higgs boson, in which the production of the Higgs boson in association with a vector boson W± and its subsequent decay into a bottom-quark pair is studied. For this purpose, data from the ATLAS detector corresponding to an integrated luminosity of 1.47 fb⁻¹ are compared with simulated physics events. A cut-based analysis for the separation of signal events from background processes is presented. Furthermore, systematic uncertainties are determined. Finally, an upper exclusion limit on the production rate of a Standard-Model Higgs boson as a function of its mass in the range from 110 GeV to 139 GeV is calculated and discussed. The strongest exclusion limit is obtained for a Higgs boson with a mass of 110 GeV, for which a production rate 16 times larger than the Standard-Model prediction can be excluded at a confidence level of 95%. Over the whole studied mass range, upper exclusion limits of 16-29 times the Standard-Model production rate are obtained.

  8. The ATLAS Muon Trigger Performance : Run 1 and initial Run 2.

    CERN Document Server

    Kasahara, Kota; The ATLAS collaboration

    2015-01-01

    Events with muons in the final state are an important signature for many physics topics at the Large Hadron Collider (LHC). An efficient trigger on muons and a detailed understanding of its performance are required. In 2012, the last year of Run 1, the instantaneous luminosity of the LHC reached 7.7×10^33 cm^-2 s^-1 and the average number of events occurring in the same bunch crossing was 25. The ATLAS muon trigger has successfully adapted to this changing environment by making use of isolation requirements, combined trigger signatures with electron and jet trigger objects, and so-called full-scan triggers, which use the full event information to search for di-lepton signatures seeded by single lepton objects. A stable and highly efficient muon trigger was vital in the discovery of the Higgs boson in 2012 and for many searches for new physics.
    The performance of muon triggers during the LHC Run 1 data-taking campaigns i...

  9. Instrumentation of the upgraded ATLAS tracker with a double buffer front-end architecture for track triggering

    International Nuclear Information System (INIS)

    Wardrope, D

    2012-01-01

    The Large Hadron Collider will be upgraded to provide an instantaneous luminosity of L = 5 × 10^34 cm^-2 s^-1, leading to excessive rates from the ATLAS Level-1 trigger. A double buffer front-end architecture for the ATLAS tracker replacement is proposed, which will enable the use of track information in trigger decisions within 20 μs in order to reduce the high trigger rates. Analysis of ATLAS simulations has found that using track information will enable the use of single lepton triggers with transverse momentum thresholds of pT ∼ 25 GeV, which will be of great benefit to the future physics programme of ATLAS.

  10. Search for a heavy neutral particle decaying into an electron and a muon using 1 fb$^{-1}$ of ATLAS data

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acerbi, Emilio; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Aderholz, Michael; Adomeit, Stefanie; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Akiyama, Kunihiro; Alam, Mohammad; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alessandria, Franco; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amaral, Pedro; Amelung, Christoph; Ammosov, Vladimir; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Andrieux, Marie-Laure; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoun, Sahar; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Arik, Engin; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Artoni, Giacomo; Arutinov, David; Asai, Shoji; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Aubert, Bernard; Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baccaglioni, Giuseppe; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Bachy, Gerard; Backes, Moritz; Backhaus, Malte; Badescu, Elisabeta; Bagnaia, Paolo; Bahinipati, Seema; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Mark; Baker, Sarah; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barashkou, Andrei; Barbaro Galtieri, Angela; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Barton, Adam Edward; Bartsch, Detlef; Bartsch, Valeria; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Battistoni, Giuseppe; Bauer, Florian; Bawa, Harinder Singh; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; 
Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Beloborodova, Olga; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Benchouk, Chafik; Bendel, Markus; Benekos, Nektarios; Benhammou, Yan; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernardet, Karim; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Bertinelli, Francesco; Bertolucci, Federico; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blazek, Tomas; Blocker, Craig; Blocki, Jacek; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bolnet, Nayanka Myriam; Bona, Marcella; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Boorman, Gary; Booth, Chris; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Botterill, David; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Bourdarios, Claire; Bousson, Nicolas; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozhko, Nikolay; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Breton, Dominique; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodbeck, Timothy; Brodet, Eyal; Broggi, Francesco; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Brown, Heather; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Buanes, Trygve; Bucci, Francesca; Buchanan, James; Buchanan, Norman; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Buira-Clark, Daniel; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, François; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Byatt, Tom; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Caloi, Rita; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarri, Paolo; Cambiaghi, Mario; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capriotti, Daniele; Capua, Marcella; Caputo, Regina; Cardarelli, 
Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Caso, Carlo; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Cataneo, Fernando; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Cevenini, Francesco; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapleau, Bertrand; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Shenjian; Chen, Tingyang; Chen, Xin; Cheng, Shaochen; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciba, Krzysztof; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Clifft, Roger; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coe, Paul; Cogan, Joshua Godfrey; Coggeshall, James; Cogneras, Eric; Cojocaru, Claudiu; Colas, Jacques; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Consonni, Michele; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cook, James; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Crescioli, Francesco; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czirr, Hendrik; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Silva, Paulo Vitor; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Daum, Cornelis; Dauvergne, Jean-Pierre; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Dawson, John; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, 
Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lotto, Barbara; De Mora, Lee; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Debbe, Ramiro; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delpierre, Pierre; Delruelle, Nicolas; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Barros do Vale, Maria Aline; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Dodd, Jeremy; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dosil, Mireia; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Drees, Jürgen; Dressnandt, Nandor; Drevermann, Hans; Driouichi, Chafik; Dris, Manolis; Dubbert, Jörg; Dubbs, Tim; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckert, Simon; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; 
Ferencei, Jozef; Ferland, Jonathan; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fischer, Peter; Fisher, Matthew; Fisher, Steve; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fokitis, Manolis; Fonseca Martin, Teresa; Forbush, David Alan; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Foster, Joe; Fournier, Daniel; Foussat, Arnaud; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Felix; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, KK; Gao, Yongsheng; Gapienko, Vladimir; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Garvey, John; Gatti, Claudio; Gaudio, Gabriella; Gaumer, Olivier; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gayde, Jean-Christophe; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghez, Philippe; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goldfarb, Steven; Golling, Tobias; Golovnia, Serguei; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gonidec, Allain; Gonzalez, Saul; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gorokhov, Serguei; Goryachev, Vladimir; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grabski, Varlen; Grafström, Per; Grah, 
Christian; Grahn, Karl-Johan; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenfield, Debbie; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grognuz, Joel; Groh, Manfred; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guarino, Victor; Guest, Daniel; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guindon, Stefan; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gupta, Ambreesh; Gusakov, Yury; Gushchin, Vladimir; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hackenburg, Robert; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamilton, Andrew; Hamilton, Samuel; Han, Hongguang; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Hatch, Mark; Hauff, Dieter; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawes, Brian; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Donovan; Hayakawa, Takashi; Hayden, Daniel; Hayward, Helen; Haywood, Stephen; Hazen, Eric; He, Mao; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Henry-Couannier, Frédéric; Hensel, Carsten; Henß, Tobias; Medina Hernandez, Carlos; Hernández Jiménez, Yesenia; Herrberg, Ruth; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Hidvegi, Attila; Higón-Rodriguez, Emilio; Hill, Daniel; Hill, John; Hill, Norman; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Hong, Tae Min; Hooft van Huysduynen, Loek; Horazdovsky, Tomas; Horn, Claus; Horner, Stephan; Horton, Katherine; Hostachy, Jean-Yves; Hou, Suen; Houlden, Michael; Hoummada, Abdeslam; Howarth, James; Howell, David; Hristova, Ivana; Hrivnac, Julius; Hruska, Ivan; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Hughes-Jones, Richard; Huhtinen, Mika; Hurst, Peter; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Ichimiya, Ryo; Iconomidou-Fayard, Lydia; Idarraga, John; Idzik, Marek; Iengo, 
Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Imbault, Didier; Imhaeuser, Martin; Imori, Masatoshi; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Ionescu, Gelu; Irles Quiles, Adrian; Ishii, Koji; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jelen, Kazimierz; Jen-La Plante, Imai; Jenni, Peter; Jeremie, Andrea; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Ge; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Lars; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tegid; Jones, Tim; Jonsson, Ove; Joram, Christian; Jorge, Pedro; Joseph, John; Jovin, Tatjana; Ju, Xiangyang; Juranek, Vojtech; Jussel, Patrick; Juste Rozas, Aurelio; Kabachenko, Vasily; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagoz, Muge; Karnevskiy, Mikhail; Karr, Kristo; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kennedy, John; Kenney, Christopher John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Ketterer, Christian; Keung, Justin; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Kholodenko, Anatoli; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kirsch, Lawrence; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kittelmann, Thomas; Kiver, Andrey; Kladiva, Eduard; Klaiber-Lodewigs, Jonas; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Knobloch, Juergen; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, 
Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Kopikov, Sergey; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuykendall, William; Kuze, Masahiro; Kuzhir, Polina; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Landsman, Hagar; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Laurelli, Paolo; Lavorato, Antonia; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Leltchouk, Mikhail; Lemmer, Boris; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levitski, Mikhail; Lewandowska, Marta; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Shengli; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, 
Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lupi, Anna; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian Thomas; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martin–Haugh, Stewart; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meinhardt, Jens; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meuser, Stefan; Meyer, 
Carsten; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Miele, Paola; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morin, Jerome; Morita, Youhei; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Nesterov, Stanislav; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nyman, Tommi; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohska, Tokio Kenneth; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olcese, Marco; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, 
Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Paige, Frank; Pajchel, Katarina; Palacino, Gabriel; Paleari, Chiara; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Pickford, Andrew; Piec, Sebastian Marcin; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Plano, Will; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Porter, Robert; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Lawrence; Price, Michael John; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Ramstedt, Magnus; Randle-Conde, Aidan Sean; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; 
Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rieke, Stefan; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodier, Stephane; Rodriguez, Diego; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rossi, Lucio; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Christian; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rulikowska-Zarebska, Elzbieta; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Runolfsson, Ogmundur; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sanchez, Arturo; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Savva, Panagiota; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. 
Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Schwindt, Thomas; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siebel, Anca-Mirela; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloan, Terrence; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sorbi, Massimo; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiriti, Eleuterio; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockmanns, Tobias; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; 
Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Suzuki, Yuta; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teinturier, Marthe; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tian, Feng; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; van Gemmeren, Peter; van 
Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Virzi, Joseph; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Wei-Cheng; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, 
Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zeman, Martin; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2011-01-01

    A search is presented for a high mass neutral particle that decays directly to the emu final state. The data sample was recorded by the ATLAS detector in sqrt{s}=7 TeV pp collisions at the LHC from March to June 2011 and corresponds to an integrated luminosity of 1.07 fb^-1. The data are found to be consistent with the Standard Model background. The high emu mass region is used to set 95% confidence level upper limits on the production of two possible new physics processes: tau sneutrinos in an R-parity violating supersymmetric model and Z'-like vector bosons in a lepton flavor violating model.
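
    The limit setting in this record relies on the full ATLAS statistical machinery; purely as an illustration of how a 95% confidence-level upper limit on a signal yield can be obtained for a single-bin counting experiment, the sketch below scans the signal strength until the CLs ratio drops below 0.05. The observed count and expected background are made-up placeholders, not numbers from the analysis.

        # Illustrative only: a single-bin counting-experiment upper limit using the
        # CLs prescription. The observed count and expected background are invented
        # placeholders, not values from the ATLAS emu search.
        from scipy.stats import poisson

        def cls_upper_limit(n_obs, b_exp, cl=0.95, step=0.01):
            """Scan the signal yield s until CLs = P(n<=n_obs|s+b)/P(n<=n_obs|b) < 1-cl."""
            p_b = poisson.cdf(n_obs, b_exp)           # background-only tail probability
            s = 0.0
            while True:
                p_sb = poisson.cdf(n_obs, s + b_exp)  # signal-plus-background tail probability
                if p_sb / p_b < 1.0 - cl:
                    return s
                s += step

        print(cls_upper_limit(n_obs=3, b_exp=2.7))    # upper limit on the signal yield at 95% CL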

  11. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up environment of the LHC, advanced techniques for analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at the hardware level, designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed and is now under installation in ATLAS. At the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against one billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage coarse-resolution hits are matched against the patterns and the accepted h...
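
    The pattern-matching step described here can be illustrated with a toy sketch. The bank, the layer count and the hit values below are invented for illustration only; in the real system roughly one billion patterns sit in the dedicated AM06 associative-memory ASICs and all comparisons happen in parallel in hardware.

        # Toy illustration of associative-memory style pattern matching (not ATLAS code).
        # A "pattern" is one coarse-resolution hit address (superstrip) per detector layer;
        # a road fires when enough layers of the event match a stored pattern.
        PATTERN_BANK = [            # hypothetical bank; the real one holds ~10^9 entries
            (12, 40, 7, 91),
            (12, 41, 7, 90),
            (55, 13, 62, 8),
        ]

        def matched_roads(event_hits, min_layers=3):
            """event_hits: one set of coarse hit addresses per detector layer."""
            roads = []
            for pattern in PATTERN_BANK:
                n_match = sum(1 for layer, ss in enumerate(pattern) if ss in event_hits[layer])
                if n_match >= min_layers:   # majority logic: allow one missing layer
                    roads.append(pattern)
            return roads

        event = [{12, 3}, {40}, {7, 99}, {90, 91}]  # coarse hits in four layers (made up)
        print(matched_roads(event))                 # -> the first two patterns fire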

  12. ATLAS RPC commissioning status and cosmic ray test results

    CERN Document Server

    Bianco, Michele

    2009-01-01

    The muon trigger system of the ATLAS experiment consists of several sub-systems, each of which needs to be tested and certified before LHC operation. In the barrel region Resistive Plate Chambers are employed. The RPC detector and its level-1 trigger electronics are designed to detect and select high-momentum muons with high time resolution and good tracking capability over a total surface of about 4000 m2. The commissioning phase provided a unique opportunity to demonstrate, before LHC start-up, the functionality of the muon trigger components such as the detector chambers, level-1 trigger electronics, detector slow control system, data acquisition chain, software and computing. We present the status of the ATLAS RPC detector, the problems met during commissioning and the solutions found, and, finally, its performance as obtained by acquiring cosmic rays.

  13. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P. [Queen Mary, University of London, London (United Kingdom); Bosman, M. [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D. [CERN, Geneva (Switzerland); Caprini, M. [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A. [University of California Irvine, Irvine, California (United States); Costa, M.J. [CERN, Geneva (Switzerland); Della Pietra, M. [INFN Sezione diNapoli, Napoli (Italy); Dotti, A. [Universita and INFN Pisa, Pisa (Italy); Eschrich, I. [University of California Irvine, Irvine, California (United States); Ferrari, R. [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M.L. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G. [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H. [Southern Methodist University, Dallas (United States); Hauschild, M. [CERN, Geneva (Switzerland); Hillier, S. [University of Birmingham, Birmingham (United Kingdom); Kehoe, B. [Southern Methodist University, Dallas (United States); Kolos, S. [University of California Irvine, Irvine, California (United States); Kordas, K. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R. [University of Victoria, Vancouver (Canada)] (and others)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center of mass energy of 14 TeV at a rate of 40 MHz. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  14. The GNAM system in the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Salvatore, D.; Adragna, P.; Bosman, M.; Burckhart, D.; Caprini, M.; Corso-Radu, A.; Costa, M.J.; Della Pietra, M.; Dotti, A.; Eschrich, I.; Ferrari, R.; Ferrer, M.L.; Gaudio, G.; Hadavand, H.; Hauschild, M.; Hillier, S.; Kehoe, B.; Kolos, S.; Kordas, K.; Mcpherson, R.

    2007-01-01

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center of mass energy of 14 TeV at a rate of 40 MHz. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  15. The Evolution of the Region of Interest Builder in the ATLAS Experiment

    CERN Document Server

    Blair, Robert; The ATLAS collaboration; Green, Barry; Love, Jeremy; Proudfoot, James; Rifki, Othmane; Panduro Vazquez, Jose Guillermo; Zhang, Jinlong

    2015-01-01

    ATLAS is a general purpose particle detector at the Large Hadron Collider (LHC) at CERN designed to measure the products of proton collisions. Given the high interaction rate (1 GHz), selective triggering in real time is required to reduce the rate to the experiment’s data storage capacity (1 kHz). To meet this requirement, ATLAS employs a combination of hardware and software triggers to select interesting collisions for physics analysis. The Region of Interest Builder (RoIB) is an integral part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom RoIB operated reliably during the first run of the LHC, it is desirable to make the RoIB more operationally maintainable in the new run, which will reach higher luminosities with an increased complexity of L1 triggers. We are responsible for migrating the ...
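
    In essence the RoIB concatenates, for every L1-accepted event, the RoI fragments delivered by the different L1 sources into a single record for the HLT. A minimal sketch of that bookkeeping is given below; the field names and source labels are illustrative and are not the actual ATLAS data format.

        # Simplified sketch of Region-of-Interest assembly (illustrative data model only).
        from dataclasses import dataclass, field

        @dataclass
        class RoI:
            source: str      # e.g. "L1Calo" or "L1Muon" (labels chosen for illustration)
            eta: float       # pseudorapidity of the candidate
            phi: float       # azimuthal angle of the candidate
            threshold: str   # which L1 threshold fired

        @dataclass
        class L1Record:
            event_id: int
            rois: list = field(default_factory=list)

        def build_record(event_id, fragments):
            """Concatenate the RoI fragments from all L1 sources into one record for the HLT."""
            record = L1Record(event_id)
            for fragment in fragments:
                record.rois.extend(fragment)
            return record

        calo = [RoI("L1Calo", eta=0.4, phi=1.2, threshold="EM22")]
        muon = [RoI("L1Muon", eta=-1.1, phi=2.9, threshold="MU20")]
        print(build_record(event_id=1, fragments=[calo, muon]))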

  16. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide-area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  17. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  18. Top physics with 0.70–1.08 fb^-1 of collisions with the ATLAS ...

    Indian Academy of Sciences (India)

    With data collected during the first half of the 2011 run of the Large Hadron Collider at sqrt{s} = 7 TeV, a substantial data sample of high-p_T triggers, corresponding to an integrated luminosity of 1.08 fb^-1, has been collected by the ATLAS detector. Measurements of the production of top-quark pairs and single top quarks in ...

  19. ATLAS copies its first PetaByte out of CERN

    CERN Multimedia

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...
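
    The quoted figure of roughly 8 PB per year follows from the 780 MB/s output rate if one assumes about 10^7 seconds of effective data-taking per year, a commonly used LHC planning number; that assumption is ours, not stated in the text. A quick check:

        # Back-of-the-envelope check of the quoted numbers (assumes ~1e7 s of data taking per year).
        rate_bytes_per_s = 780e6          # 780 MB/s out of the CERN Tier-0
        live_seconds_per_year = 1e7       # assumed effective running time per year
        n_tier1_sites = 10

        total = rate_bytes_per_s * live_seconds_per_year
        print(f"{total / 1e15:.1f} PB per year in total")                                   # ~7.8 PB, i.e. "around 8 PB"
        print(f"{rate_bytes_per_s / n_tier1_sites / 1e6:.0f} MB/s per Tier-1 on average")   # ~78 MB/s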

  1. Muon Event Filter Software for the ATLAS Experiment at LHC

    CERN Document Server

    Biglietti, M; Assamagan, Ketevi A; Baines, J T M; Bee, C P; Bellomo, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Conde, P; Conde-Muíño, P; De Santo, A; De Seixas, J M; Di Mattia, A; Dos Anjos, A; Dosil, M; Díaz-Gómez, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luminari, L; Maeno, T; Masik, J; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pinfold, J L; Pinto, P; Primavera, M; Pérez-Réale, V; Qian, Z; Resconi, S; Rosati, S; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S; Sutton, M; Sánchez, C; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Ventura, A; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Computing In High Energy Physics

    2005-01-01

    At the LHC the 40 MHz bunch crossing rate dictates a high selectivity of the ATLAS Trigger system, which has to keep the full physics potential of the experiment in spite of a limited storage capability. The level-1 trigger, implemented in custom hardware, will reduce the initial rate to 75 kHz and is followed by the software-based level-2 and Event Filter, usually referred to as the High Level Trigger (HLT), which further reduce the rate to about 100 Hz. In this paper an overview of the implementation of the offline muon reconstruction algorithms MOORE (Muon Object Oriented REconstruction) and MuId (Muon Identification) as Event Filter in the ATLAS online framework is given. The MOORE algorithm performs the reconstruction inside the Muon Spectrometer providing a precise measurement of the muon track parameters outside the calorimeters; MuId combines the measurements of all ATLAS sub-detectors in order to identify muons and provides the best estimate of their momentum at the production vertex. In the HLT implementatio...
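
    The quoted rates imply an overall online rejection factor of about 400 000, split between the two trigger levels; the quick arithmetic below simply restates the numbers given in the abstract.

        # Rejection factors implied by the quoted rates (numbers taken from the abstract above).
        bunch_crossing_rate = 40e6   # Hz
        level1_rate = 75e3           # Hz, after the custom-hardware level-1 trigger
        hlt_rate = 100.0             # Hz, after level-2 plus Event Filter (HLT)

        print(f"Level-1 rejection: ~{bunch_crossing_rate / level1_rate:.0f}x")   # ~533x
        print(f"HLT rejection:     ~{level1_rate / hlt_rate:.0f}x")              # ~750x
        print(f"Overall:           ~{bunch_crossing_rate / hlt_rate:.0f}x")      # ~400000x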

  2. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  3. ATLAS FTK a - very complex - custom super computer

    International Nuclear Information System (INIS)

    Kimura, N

    2016-01-01

    In the high pile-up environment of the LHC, advanced techniques for analysing the data in real time are required in order to maximize the rate of physics processes of interest with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at the hardware level that is designed to deliver full-scan tracks with p_T above 1 GeV to the ATLAS trigger system for events passing the Level-1 accept (at a maximum rate of 100 kHz). In order to achieve this performance, a highly parallel system was designed and it is currently being commissioned within ATLAS. Starting in 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and coverage will later be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against one billion patterns stored in custom ASIC chips (Associative Memory chip - AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted hits undergo track fitting implemented in FPGAs. Tracks with p_T > 1 GeV are delivered to the High Level Trigger within about 100 μs. The resolution of the tracks coming from FTK is close to that of offline tracking and it will allow for reliable detection of primary and secondary vertices at trigger level and improved trigger performance for b-jets and tau leptons. This contribution will give an overview of the FTK system and present the status of commissioning of the system. Additionally, the expected FTK performance will be briefly described. (paper)
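
    The "coarse-resolution hits" mentioned above are full-resolution silicon hits mapped onto wider superstrips before the associative-memory lookup. The short sketch below illustrates that binning step; the superstrip width and channel numbers are invented for illustration.

        # Toy superstrip binning: full-resolution hit positions are coarsened before the
        # pattern lookup; only the matched roads are then fitted at full resolution.
        SUPERSTRIP_WIDTH = 16   # channels per superstrip (illustrative value)

        def to_superstrip(channel):
            """Map a full-resolution channel number to its coarse superstrip address."""
            return channel // SUPERSTRIP_WIDTH

        hits = [35, 36, 130, 131, 200]              # made-up channel numbers in one layer
        coarse = sorted({to_superstrip(h) for h in hits})
        print(coarse)                               # -> [2, 8, 12]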

  4. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00025195; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will pose a major challenge for the trigger and data acquisition systems of all the experiments. Intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment has planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast TracKer is designed to perform full-scan track reconstruction of every event accepted by the ATLAS first-level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whole trackin...

  5. 8 October 2013 - Rolex Director- General G. Marini in the ATLAS Control Room with CERN Director-General R. Heuer and ATLAS Collaboration Senior Physicist C. Rembser; visiting the ATLAS experimental cavern at LHC Point 1. Were also present from the Directorate: S. Lettow, Director for Administration and General Infrastructure; from the ATLAS Collaboration: Technische Universitaet Dortmund (DE) J. Jentzsch and SLAC National Accelerator Laboratory (US) G. Piacquadio.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    8 October 2013 - Rolex Director- General G. Marini in the ATLAS Control Room with CERN Director-General R. Heuer and ATLAS Collaboration Senior Physicist C. Rembser; visiting the ATLAS experimental cavern at LHC Point 1. Were also present from the Directorate: S. Lettow, Director for Administration and General Infrastructure; from the ATLAS Collaboration: Technische Universitaet Dortmund (DE) J. Jentzsch and SLAC National Accelerator Laboratory (US) G. Piacquadio.

  6. Taking ATLAS to new heights

    CERN Document Server

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  7. A detailed and verified wind resource atlas for Denmark

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, N G; Landberg, L; Rathmann, O; Nielsen, M N [Risoe National Lab., Roskilde (Denmark)]; Nielsen, P [Energy and Environmental Data, Aalborg (Denmark)]

    1999-03-01

    A detailed and reliable wind resource atlas covering the entire land area of Denmark has been established. Key words of the methodology are wind atlas analysis, interpolation of wind atlas data sets, automated generation of digital terrain descriptions and modelling of local wind climates. The atlas contains wind speed and direction distributions, as well as mean energy densities of the wind, for 12 sectors and four heights above ground level: 25, 45, 70 and 100 m. The spatial resolution is 200 meters in the horizontal. The atlas has been verified by comparison with actual wind turbine power productions from over 1200 turbines. More than 80% of these turbines were predicted to within 10%. The atlas will become available on CD-ROM and on the Internet. (au)
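
    The "mean energy densities of the wind" tabulated in such an atlas follow from the cube of the wind speed: the power density of the flow is 0.5*rho*v^3, so the mean has to be taken over the wind-speed distribution rather than computed from the mean speed alone. The sketch below, with invented sample speeds, shows the difference.

        # Mean wind power density from a set of wind-speed samples (air density ~1.225 kg/m^3).
        # The speeds are invented sample values, not data from the Danish atlas.
        RHO = 1.225  # kg/m^3

        def mean_power_density(speeds):
            """Average of 0.5 * rho * v^3 over the observed speeds, in W/m^2."""
            return 0.5 * RHO * sum(v**3 for v in speeds) / len(speeds)

        speeds = [3.0, 5.0, 7.0, 9.0, 12.0]
        naive = 0.5 * RHO * (sum(speeds) / len(speeds))**3     # wrong: cube of the mean speed
        print(f"{mean_power_density(speeds):.0f} W/m^2 vs naive {naive:.0f} W/m^2")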

  8. The Evolution of the Region of Interest Builder for the ATLAS Experiment at CERN

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00060668; Blair, Robert; Crone, Gordon Jeremy; Green, Barry; Love, Jeremy; Proudfoot, James; Rifki, Othmane; Panduro Vazquez, William; Vandelli, Wainer; Zhang, Jinlong

    2016-01-01

    ATLAS is a general purpose particle detector at the Large Hadron Collider (LHC) at CERN, designed to measure the products of proton collisions. Given the high interaction rate (40 MHz), selective triggering in real time is required to reduce the rate to the experiment's data storage capacity (1 kHz). To meet this requirement, ATLAS employs a hardware trigger that reduces the rate to 100 kHz and software-based triggers to select interesting interactions for physics analysis. The Region of Interest Builder (RoIB) is an essential part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom VME-based RoIB operated reliably during the first run of the LHC, it is desirable to have a more flexible and more operationally maintainable RoIB in the future, as the LHC reaches higher luminosity and ATLAS increases t...

  9. Progress on the Level-1 Calorimeter Trigger

    CERN Multimedia

    Eric Eisenhandler

    The Level-1 Calorimeter Trigger (L1Calo) has recently passed a number of major hurdles. The various electronic modules that make up the trigger are either in full production or are about to be, and preparations in the ATLAS pit are well advanced. L1Calo has three main subsystems. The PreProcessor converts analogue calorimeter signals to digital, associates the rather broad trigger pulses with the correct proton-proton bunch crossing, and does a final calibration in transverse energy before sending digital data streams to the two algorithmic trigger processors. The Cluster Processor identifies and counts electrons, photons and taus, and the Jet/Energy-sum Processor looks for jets and also sums missing and total transverse energy. Readout drivers allow the performance of the trigger to be monitored online and offline, and also send region-of-interest information to the Level-2 Trigger. The PreProcessor (Heidelberg) is the L1Calo subsystem with the largest number of electronic modules (124), and most of its fu...

  10. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter fa...
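
    The rate figures quoted above imply the following back-of-the-envelope rejection factors; the 100 kHz Level-1 output and the 400 Hz recording rate are assumed stand-ins (for "a few hundred Hz"), not numbers taken from the record.

        bc_rate  = 40e6    # Hz, LHC design bunch-crossing rate (from the record)
        l1_rate  = 100e3   # Hz, assumed Level-1 output rate (Run-2 design figure)
        rec_rate = 400.0   # Hz, stand-in for "a few hundred Hz" recording rate

        print(f"L1 rejection  ~ 1 in {bc_rate / l1_rate:,.0f}")
        print(f"HLT rejection ~ 1 in {l1_rate / rec_rate:,.0f}")
        print(f"overall       ~ 1 in {bc_rate / rec_rate:,.0f}")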

  11. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible ReadOutDriver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  12. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  13. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Directory of Open Access Journals (Sweden)

    Kishan Andre Liyanage

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.

  14. The ATLAS Detector Safety System

    CERN Multimedia

    Helfried Burckhart; Kathy Pommes; Heidi Sandaker

    The ATLAS Detector Safety System (DSS) has the mandate to put the detector in a safe state in case an abnormal situation arises which could be potentially dangerous for the detector. It covers the CERN alarm severity levels 1 and 2, which address serious risks for the equipment. The highest level 3, which also includes danger for persons, is the responsibility of the CERN-wide system CSAM, which always triggers an intervention by the CERN fire brigade. DSS works independently from and hence complements the Detector Control System, which is the tool to operate the experiment. The DSS is organized in a Front-End (FE), which autonomously fulfills the safety functions, and a Back-End (BE) for interaction and configuration. The overall layout is shown in the figure "ATLAS DSS configuration". The FE implementation is based on a redundant Programmable Logic Controller (PLC) system, which is also used in industry for such safety applications. Each of the two PLCs alone, one located underground and one at the s...

  15. Electromagnetic Cell Level Calibration for ATLAS Tile Calorimeter Modules

    CERN Document Server

    Kulchitskii, Yu A; Budagov, Yu A; Khubua, J I; Rusakovitch, N A; Vinogradov, V B; Henriques, A; Davidek, T; Tokar, S; Solodkov, A; Vichou, I

    2006-01-01

    We have determined the electromagnetic calibration constants of 11% of the TileCal modules exposed to electron beams with incident angles of 20 and 90 degrees. The gains of all the calorimeter cells have been pre-equalized using the radioactive Cs-source that will also be used in situ. The average values for these modules are equal to: for the flat filter method, 1.154+/-0.002 pC/GeV and 1.192+/-0.002 pC/GeV for 20 and 90 degrees; for the fit method, 1.040+/-0.002 pC/GeV and 1.068+/-0.003 pC/GeV, respectively. These average values for all cells of the calibrated modules agree with the weighted average calibration constants for the separate modules within the errors. Using the individual calibration constants for every module, the RMS spread of the constants will be 1.9+/-0.1%. In the case of the global constant this value will be 2.6+/-0.1%. Finally, we present the global constants which should be used for the electromagnetic calibration of the ATLAS Tile hadronic calorimeter data in the ATHENA framework. These constants ar...

  16. Search for a heavy neutral particle decaying into an electron and a muon using 1 fb-1 of ATLAS data

    International Nuclear Information System (INIS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acerbia, E.; Acharya, B.S.; Adams, D.L.; Addy, T.N.; Adelman, J.; Aderholz, M.; Adomeit, S.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J.A.

    2011-01-01

    A search is presented for a high-mass neutral particle that decays directly to the e±μ∓ final state. The data sample was recorded by the ATLAS detector in √s = 7 TeV pp collisions at the LHC from March to June 2011 and corresponds to an integrated luminosity of 1.07 fb⁻¹. The data are found to be consistent with the Standard Model background. The high e±μ∓ mass region is used to set 95% confidence level upper limits on the production of two possible new physics processes: tau sneutrinos in an R-parity violating supersymmetric model and Z'-like vector bosons in a lepton flavor violating model.

  17. ATLAS ITk Strip Detector for High-Luminosity LHC

    CERN Document Server

    Kroll, Jiri; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment is currently preparing for an upgrade of its tracking system for the High-Luminosity LHC, which is scheduled for 2026. The expected peak instantaneous luminosity of up to 7.5×10^34 cm^-2 s^-1, corresponding to approximately 200 inelastic proton-proton interactions per beam crossing, the radiation damage at an integrated luminosity of 3000 fb^-1 with hadron fluences above 1×10^16 1-MeV-neutron-equivalent per cm^2, as well as the fast hardware tracking capability needed to bring a Level-0 trigger rate of a few MHz down to a Level-1 trigger rate below 1 MHz, require the replacement of the existing Inner Detector by an all-silicon Inner Tracker (ITk) with a pixel detector surrounded by a strip detector. The current prototyping phase, which is working with an ITk Strip Detector consisting of a four-layer barrel and a forward region composed of six discs on each side of the barrel, has resulted in the ATLAS ITk Strip Detector Technical Design Report (TDR), which starts the pre-production readiness phase at the ...

  18. ATLAS ITk Strip Detector for High-Luminosity LHC

    CERN Document Server

    Kroll, Jiri; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment is currently preparing for an upgrade of its tracking system for the High-Luminosity LHC, which is scheduled for 2026. The expected peak instantaneous luminosity of up to $7.5\times10^{34}\;\mathrm{cm}^{-2}\mathrm{s}^{-1}$, corresponding to approximately 200 inelastic proton-proton interactions per beam crossing, the radiation damage at an integrated luminosity of $3000\;\mathrm{fb}^{-1}$ with hadron fluences above $2\times10^{16}\;\mathrm{n}_{\mathrm{eq}}/\mathrm{cm}^{2}$, as well as the fast hardware tracking capability needed to bring a Level-0 trigger rate of a few MHz down to a Level-1 trigger rate below 1 MHz, require the replacement of the existing Inner Detector by an all-silicon Inner Tracker with a pixel detector surrounded by a strip detector. The current prototyping phase, which is working with an ITk Strip Detector consisting of a four-layer barrel and a forward region composed of six disks on each side of the barrel, has resulted in the ATLAS Inner Tracker Strip Detector Technical Design R...

  19. A Hardware Fast Tracker for the ATLAS trigger

    CERN Document Server

    Asbah, Nedaa; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch-crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^{34} cm^{-2}s^{-1}. After a successful period of data taking from 2010 to early 2013, the LHC restarted with much higher instantaneous luminosity. This will increase the load on the High Level Trigger, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project; it is a hardware processor that will provide, for every Level-1-accepted event (at up to 100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of the primary and secondar...

  20. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  1. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R and D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  2. FTK: The hardware Fast TracKer of the ATLAS experiment at CERN

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525014; The ATLAS collaboration

    2017-01-01

    In the ever-increasing pile-up of the Large Hadron Collider environment, the trigger systems of the experiments have to be exceedingly sophisticated and fast at the same time in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). To accomplish this, FTK is a highly parallel system which is currently under installation in ATLAS. It will first provide the trigger system with tracks in the central region of the ATLAS detector, and next year it is expected that it will cover the whole detector. The system is based on pattern matching between hits coming from the silicon trackers of the ATLAS detector and 1 billion simulated patterns stored in specially designed ASIC chips (Associative Memory – AM06). In a first stage, coarse resolution hits are matche...
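
    A toy emulation, under stated assumptions, of the pattern-matching idea described above: coarse-resolution hit identifiers per silicon layer are compared against pre-stored patterns ("roads"), and a pattern fires when enough layers match. The layer count, hit IDs and majority threshold are invented and do not reflect the actual AM06 logic.

        # Toy emulation of associative-memory-style pattern matching (not the
        # AM06 implementation): one coarse hit ID per layer defines a pattern,
        # and a pattern fires when enough layers of the event contain that ID.
        patterns = {
            "p1": (12, 47, 33, 90, 21, 5, 78, 64),   # 8 layers, invented IDs
            "p2": (12, 47, 34, 91, 21, 6, 78, 65),
        }

        def matched(pattern, event_hits, min_layers=7):
            """Count layers whose coarse hit ID appears in the event."""
            layer_ok = [hit_id in event_hits[layer]
                        for layer, hit_id in enumerate(pattern)]
            return sum(layer_ok) >= min_layers

        # Event: set of coarse hit IDs seen in each layer (invented).
        event = [{12}, {47}, {33, 34}, {90}, {21}, {5}, {78}, {64, 65}]

        roads = [name for name, patt in patterns.items() if matched(patt, event)]
        print("matched roads:", roads)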

  3. ATLAS Jet Trigger Update for the LHC Run II

    CERN Document Server

    Prince, Sebastien; The ATLAS collaboration

    2015-01-01

    After the current shutdown, the LHC is about to resume operation for a new data-taking period, in which it will operate with increased luminosity, event rate and centre-of-mass energy. The new conditions will impose more demanding constraints on the ATLAS online trigger reconstruction and selection system. To cope with such increased constraints, the ATLAS High Level Trigger, placed after a first hardware-based Level-1 trigger, has been redesigned by merging two previously separated software-based processing levels. In the new joint processing level, the algorithms run in the same computing nodes, thus sharing resources, minimizing the data transfer from the detector buffers and increasing the algorithm flexibility. The Jet trigger software selects events containing high transverse momentum hadronic jets. It needs optimal jet energy resolution to help reject an overwhelming background while retaining good efficiency for interesting jets. In particular, this requires the CPU-intensive reconstruction of tridimen...

  4. Level-1 Calorimeter Trigger starts firing

    CERN Multimedia

    Stephen Hillier

    2007-01-01

    L1Calo is one of the major components of ATLAS First Level trigger, along with the Muon Trigger and Central Trigger Processor. It forms all of the first-level calorimeter-based triggers, including electron, jet, tau and missing ET. The final system consists of over 250 custom designed 9U VME boards, most containing a dense array of FPGAs or ASICs. It is subdivided into a PreProcessor, which digitises the incoming trigger signals from the Liquid Argon and Tile calorimeters, and two separate processor systems, which perform the physics algorithms. All of these are highly flexible, allowing the possibility to adapt to beam conditions and luminosity. All parts of the system are read out through Read-Out Drivers, which provide monitoring data and Region of Interest (RoI) information for the Level-2 trigger. Production of the modules is now essentially complete, and enough modules exist to populate the full scale system in USA15. Installation is proceeding rapidly - approximately 90% of the final modules are insta...

  5. Task management in the new ATLAS production system

    International Nuclear Information System (INIS)

    De, K; Golubkov, D; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.

  6. ATLAS-Canada Network

    Energy Technology Data Exchange (ETDEWEB)

    Gable, I; Sobie, R J [HEPnet/Canada, Victoria, BC (Canada); Bedinelli, M; Butterworth, S; Groer, L; Kupchinsky, V [University of Toronto, Toronto, ON (Canada); Caron, B; McDonald, S; Payne, C [TRIUMF Laboratory, Vancouver, BC (Canada); Chambers, R [University of Alberta, Edmonton, AB (Canada); Fitzgerald, B [University of Victoria, Victoria, BC (Canada); Hatem, R; Marshall, P; Pobric, D [CANARIE Inc., Ottawa, ON (Canada); Maddalena, P; Mercure, P; Robertson, S; Rochefort, M [McGill University, Montreal, QC (Canada); McWilliam, D [BCNet, Vancouver, BC (Canada); Siegert, M [Simon Fraser University, Burnaby, BC (Canada)], E-mail: igable@uvic.ca (and others)

    2008-12-15

    The ATLAS-Canada computing model consists of a WLCG Tier-1 computing centre located at the TRIUMF Laboratory in Vancouver, Canada, and two distributed Tier-2 computing centres in eastern and western Canadian universities. The TRIUMF Tier-1 is connected to the CERN Tier-0 via a 10G dedicated circuit provided by CANARIE. The Canadian institutions hosting Tier-2 facilities are connected to TRIUMF via 1G lightpaths, and routing between Tier-2s occurs through TRIUMF. This paper discusses the architecture of the ATLAS-Canada network, the challenges of building the network, and the future plans.

  7. Report to users of ATLAS, January 1998

    International Nuclear Information System (INIS)

    Ahmad, I.; Hofman, D.

    1998-01-01

    This report is aimed at informing users about the operating schedule, user policies, and recent changes in research capabilities. It covers the following subjects: (1) status of the Argonne Tandem-Linac Accelerator System (ATLAS) accelerator; (2) the move of Gammasphere from LBNL to ANL; (3) commissioning of the CPT mass spectrometer at ATLAS; (4) highlights of recent research at ATLAS; (5) Program Advisory Committee; and (6) ATLAS User Group Executive Committee

  8. The zero degree calorimeter for the ATLAS experiment

    International Nuclear Information System (INIS)

    Leite, Marco

    2009-01-01

    of dual gain amplifier and 10 bit digitizer is used. The ZDC deploys the same digitization electronics as the ATLAS Level-1 Trigger (PreProcessor Modules), and is capable of storing 13 samples per channel at a digitization rate of 40 MHz, which is doubled by digitizing the same channel using 0 and 12.5 ns delays. The integration of this new system into the ATLAS data acquisition and trigger systems has recently been accomplished with success during the last ATLAS cosmic ray integration runs, passing the readiness test for the LHC startup. (author)
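
    A small sketch of the interleaved-sampling idea in the record: two 40 MHz sample streams of the same channel, one delayed by 12.5 ns, merge into an effective 80 MHz waveform. The sample values are invented.

        # Two 40 MHz streams of the same channel, the second delayed by 12.5 ns.
        direct  = [(i * 25.0,        v) for i, v in enumerate([1, 4, 9, 6, 2])]
        delayed = [(i * 25.0 + 12.5, v) for i, v in enumerate([2, 7, 8, 4, 1])]

        waveform = sorted(direct + delayed)   # effectively an 80 MHz sampling
        for t, v in waveform:
            print(f"t = {t:5.1f} ns  ADC = {v}")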

  9. Combining Amplification Typing of L1 Active Subfamilies (ATLAS) with High-Throughput Sequencing.

    Science.gov (United States)

    Rahbari, Raheleh; Badge, Richard M

    2016-01-01

    With the advent of new generations of high-throughput sequencing technologies, the catalog of human genome variants created by retrotransposon activity is expanding rapidly. However, despite these advances in describing L1 diversity and the fact that L1 must retrotranspose in the germline or prior to germline partitioning to be evolutionarily successful, direct assessment of de novo L1 retrotransposition in the germline or early embryogenesis has not been achieved for endogenous L1 elements. A direct study of de novo L1 retrotransposition into susceptible loci within sperm DNA (Freeman et al., Hum Mutat 32(8):978-988, 2011) suggested that the rate of L1 retrotransposition in the germline is much lower than previously estimated. The ATLAS L1 display technique (Badge et al., Am J Hum Genet 72(4):823-838, 2003) can be used to investigate de novo L1 retrotransposition in human genomes. In this chapter, we describe how we combined a high-coverage ATLAS variant with high-throughput sequencing, achieving 11-25× sequence depth per single amplicon, to study L1 retrotransposition in whole genome amplified (WGA) DNAs.

  10. The performance and development of the ATLAS Inner Detector Trigger

    International Nuclear Information System (INIS)

    Washbrook, A

    2014-01-01

    A description of the ATLAS Inner Detector (ID) software trigger algorithms and the performance of the ID trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The ID trigger HLT algorithms are essential for a large number of signatures within the ATLAS trigger. During the shutdown, modifications are being made to the LHC machine, to increase both the beam energy and luminosity. This in turn poses significant challenges for the trigger algorithms both in terms of execution time and physics performance. To meet these challenges the ATLAS HLT software is being restructured to run as a single stage rather than in the two distinct levels present during the Run 1 operation. This is allowing the tracking algorithms to be redesigned to make optimal use of the CPU resources available and to integrate new detector systems being added to ATLAS for post-shutdown running. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are also discussed. In addition, potential improvements in the algorithm performance resulting from the additional spacepoint information from the new Insertable B-Layer are presented

  11. The geosystems of complex geographical atlases

    Directory of Open Access Journals (Sweden)

    Jovanović Jasmina

    2012-01-01

    Complex geographical atlases represent geosystems of different hierarchical rank, complexity, diversity, scale and connection. They bring together a large amount of diverse information about geospace and present it in a systematized, correlated and readily apparent form. The amount of information an atlas reveals is determined by its content structure and form of presentation, and its quality depends on the method of data visualization and on the quality of the geodata. Cartographic visualization is a cognitive process: analysis converts geospatial data into knowledge. A complex geographical atlas constitutes an information complex, a spatially and temporally coordinated database on geosystems of different complexity and territorial scope. Each geographical atlas defines a concrete geosystem, whose systemic organization (structural and contextual) determines its complexity and concreteness. In complex atlases, the attributes of geosystems are modelled and the information is given in a systematized, graphically uniform form. The atlas can therefore be considered a database, and semantic analysis of the data is important in composing it. The result of semantic modelling is a structuring of the information, an emphasis on the logical connections between phenomena and processes, and a definition of their classes according to degree of similarity, which makes it efficient to find the needed information when the database is used. An atlas map has a special power to integrate sets of geodata and to present information content in a user-friendly, understandable visual and tactile way. Composing an atlas by systemic cartography requires information on concretely defined geosystems of different hierarchical level, the application of scientific methods and the making of an adequate number of analytical, synthetic

  12. ATLAS L1 Muon Trigger Upgrade with sTGC: Design and Performance

    CERN Document Server

    Gerbaudo, Davide

    2014-01-01

    We describe the upgrade of the ATLAS forward Level-1 (L1) muon trigger planned for LHC running at luminosities above 2×10^34 cm^-2 s^-1. This upgrade, which aims at suppressing fake muon triggers from non-pointing tracks, foresees the installation of a New Small Wheel (NSW) detector in the endcap region. This region of the detector will be instrumented with small-strip Thin Gap Chambers (sTGC) that will allow the L1 muon trigger rate to be kept below 25 kHz. This rate suppression is realized with a two-step trigger system: first, an ultra-fast pad trigger defines the regions of interest containing potential high-pT muon candidates; second, an accurate track measurement is performed with precision readouts from the sTGC strips, providing the required 1 mrad angular resolution. The new, sTGC-based L1 muon trigger is reviewed. A description of the sTGC detector as well as of its readout system is given. The first results from the simulation of this new trigger system are presented. These studies show that the pad-tr...
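
    To illustrate the pointing selection the record describes, the following sketch fits a straight-line segment to invented strip hits in four planes and compares its angle with that expected for a track from the interaction point; the geometry and the 5 mrad cut are assumptions, and only the idea of a mrad-level angular comparison is taken from the record.

        from math import atan

        # Four sTGC planes (z) and the strip positions (r) of a toy track segment.
        z_planes = [7000.0, 7011.0, 7022.0, 7033.0]        # mm (invented geometry)
        r_hits   = [1000.00, 1001.58, 1003.16, 1004.74]    # mm (invented hits)

        n  = len(z_planes)
        zm = sum(z_planes) / n
        rm = sum(r_hits) / n
        slope = (sum((z - zm) * (r - rm) for z, r in zip(z_planes, r_hits))
                 / sum((z - zm) ** 2 for z in z_planes))   # least-squares dr/dz

        theta_segment  = atan(slope)      # local segment angle
        theta_pointing = atan(rm / zm)    # angle expected for an IP-pointing track
        dtheta_mrad = (theta_segment - theta_pointing) * 1e3
        print(f"delta-theta = {dtheta_mrad:.2f} mrad ->",
              "pointing" if abs(dtheta_mrad) < 5.0 else "non-pointing")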

  13. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata, which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  14. EnviroAtlas - NHDPlus V2 WBD Snapshot, EnviroAtlas version - Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is a digital hydrologic unit boundary layer to the Subwatershed (12-digit) 6th level for the conterminous United States, based on the...

  15. Long-term operating experience for the ATLAS superconducting resonators

    International Nuclear Information System (INIS)

    Pardo, R.; Zinkann, G.

    1999-01-01

    Portions of the ATLAS accelerator have been operating now for over 21 years. The facility has accumulated several million resonator-hours of operation at this point and has demonstrated the long-term reliability of RF superconductivity. The overall operating performance of the ATLAS facility has established a level of beam quality, flexibility, and reliability not previously achieved with heavy-ion accelerator facilities. The actual operating experience and maintenance history of ATLAS are presented for ATLAS resonators and associated electronics systems. Solutions to problems that appeared in early operation as well as current problems needing further development are discussed

  16. A digital atlas of the dog brain.

    Directory of Open Access Journals (Sweden)

    Ritobrato Datta

    There is a long history and a growing interest in the canine as a subject of study in neuroscience research and in translational neurology. In the last few years, anatomical and functional magnetic resonance imaging (MRI) studies of awake and anesthetized dogs have been reported. Such efforts can be enhanced by a population atlas of canine brain anatomy to implement group analyses. Here we present a canine brain atlas derived as the diffeomorphic average of a population of fifteen mesaticephalic dogs. The atlas includes: (1) a brain template derived from in-vivo, T1-weighted imaging at 1 mm isotropic resolution at 3 Tesla (with and without the soft tissues of the head); (2) a co-registered, high-resolution (0.33 mm isotropic) template created from imaging of ex-vivo brains at 7 Tesla; (3) a surface representation of the gray matter/white matter boundary of the high-resolution atlas (including labeling of gyral and sulcal features). The properties of the atlas are considered in relation to historical nomenclature and the evolutionary taxonomy of the Canini tribe. The atlas is available for download (https://cfn.upenn.edu/aguirre/wiki/public:data_plosone_2012_datta).

  17. FTK status and track triggers in ATLAS at HL-LHC

    CERN Document Server

    ATLAS Collaboration; The ATLAS collaboration

    2016-01-01

    The expected instantaneous luminosities delivered by the Large Hadron Collider will place continually increasing burdens on the trigger systems of the ATLAS detector. The use of tracking information is key to maintaining a manageable trigger rate while keeping a high efficiency. At the same time, however, track finding is one of the more resource-intensive tasks in the software-based processing farms of the high level trigger system. To support the trigger, ATLAS is building and currently installing the Fast TracK Finder (FTK), a hardware-based system that uses massively parallel pattern recognition in Associative Memory to reconstruct tracks above transverse momenta of 1 GeV across the entire detector at 100 kHz with a latency of ~100 microseconds. In the first-stage of track finding, FTK compares hits in ATLAS silicon detectors against ~1 billion pre-computed track pattern candidates. Track parameters for these candidates, including goodness-of-fit tests, are calculated in FPGAs using a linear approximation...

  18. Readout Electronics for the ATLAS LAr Calorimeter at HL-LHC

    CERN Document Server

    Chen, H; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is one of the two general-purpose detectors designed to study proton-proton collisions (14 TeV in the center of mass) produced at the Large Hadron Collider (LHC) and to explore the full physics potential of the LHC machine at CERN. The ATLAS Liquid Argon (LAr) calorimeters are high precision, high sensitivity and high granularity detectors designed to provide precision measurements of electrons, photons, jets and missing transverse energy. ATLAS (and its LAr Calorimeters) has been operating and collecting p-p collisions at LHC since 2009. The on-detector electronics (front-end) part of the current readout electronics of the calorimeters measures the ionization current signals by means of preamplifiers, shapers and digitizers and then transfers the data to the off-detector electronics (back-end) for further elaboration, via optical links. Only the data selected by the level-1 calorimeter trigger system are transferred, achieving a bandwidth reduction to 1.6 Gbps. The analog trigger sum sig...

  19. The Evolution of the Region of Interest Builder in the ATLAS Experiment at CERN

    CERN Document Server

    Rifki, Othmane; The ATLAS collaboration; Crone, Gordon Jeremy; Green, Barry; Love, Jeremy; Proudfoot, James; Panduro Vazquez, William; Vandelli, Wainer; Zhang, Jinlong

    2015-01-01

    ATLAS is a general-purpose particle detector at the Large Hadron Collider (LHC) at CERN designed to measure the products of proton collisions. Given the high interaction rate (1 GHz), selective triggering in real time is required to reduce the rate to the experiment's data storage capacity (1 kHz). To meet this requirement, ATLAS employs a combination of hardware and software triggers to select interesting collisions for physics analysis. The Region of Interest Builder (RoIB) is an integral part of the ATLAS detector Trigger and Data Acquisition (TDAQ) chain where the coordinates of the regions of interest (RoIs) identified by the first level trigger (L1) are collected and passed to the High Level Trigger (HLT) to make a decision. While the current custom RoIB operated reliably during the first run of the LHC, it is desirable to have a more operationally maintainable RoIB in the new run, which will reach higher luminosities with an increased complexity of L1 triggers. We are responsible for migrating the ...

  20. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Nakahama, Yu; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in early 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will review the upgrades to the ATLAS Trigger system that have been implemented during the shutdown and that will allow us to cope with these increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system and the merging of the prev...

  1. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whol...

  2. Real-time configuration changes of the ATLAS High Level Trigger

    CERN Document Server

    Winklmeier, F

    2010-01-01

    The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally to about 2000 nodes, following the expected increase in the luminosity of the LHC. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and are retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking while ensuring a consistent and reproducible configuration across the entire HLT farm. The technique...

  3. Petrographic atlas characterisation of aggregates regarding potential reactivity to alkalis : RILEM TC 219-ACS recommended guidance AAR-1.2, for use with the RILEM AAR-1.1 petrographic examination method

    CERN Document Server

    Ribeiro, Maria; Broekmans, Maarten; Sims, Ian

    2016-01-01

    This RILEM AAR 1.2 Atlas is complementary to the petrographic method described in RILEM AAR 1.1. It is designed and intended to assist in the identification of alkali-reactive rock types in concrete aggregate by thin-section petrography. Additional issues include:
    • optical thin-section petrography conforming to RILEM AAR 1.1 is considered the prime assessment method for aggregate materials, being effective regarding cost and time. Unequivocal identification of minerals in very fine-grained rock types may however require use of supplementary methods.
    • the atlas adheres to internationally adopted schemes for rock classification and nomenclature, as recommended in AAR 1.1. Thus, rock types are classified as igneous, sedimentary or metamorphic based upon mineral content, microstructure and texture/fabric.
    • in addition, the atlas identifies known alkali-reactive silica types in each rock type presented. It also identifies consistent coincidence between certain lithologies and silica types; however, it ref...

  4. 1 October 2013 - British Minister of State for Trade and Investment Lord Green of Hurstpierpoint signing the guest book with Head of Internationals Relations R. Voss; visiting the LHC tunnel at Point 1 and the ATLAS experimental cavern with ATLAS Collaboration Members K. Behr and J. Catmore.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    1 October 2013 - British Minister of State for Trade and Investment Lord Green of Hurstpierpoint signing the guest book with Head of Internationals Relations R. Voss; visiting the LHC tunnel at Point 1 and the ATLAS experimental cavern with ATLAS Collaboration Members K. Behr and J. Catmore.

  5. Upgrading the ATLAS Tile Calorimeter Electronics

    Directory of Open Access Journals (Sweden)

    Carrió Fernando

    2013-11-01

    This work summarizes the status of the on-detector and off-detector electronics developments for the Phase 2 Upgrade of the ATLAS Tile Calorimeter at the LHC, scheduled around 2022. A demonstrator prototype for a slice of the calorimeter including most of the new electronics is planned to be installed in ATLAS in the middle of 2014, during the first Long Shutdown. For the on-detector readout, three different front-end board (FEB) alternatives are being studied: a new version of the 3-in-1 card, the QIE chip and a dedicated ASIC called FATALIC. The Main Board will provide communication and control to the FEBs and the Daughter Board will transmit the digitized data to the off-detector electronics in the counting room, where the super Read-Out Driver (sROD) will perform processing tasks on them and will be the interface to the trigger levels 0, 1 and 2.

  6. Online Measurement of LHC Beam Parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections....
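
    A minimal sketch, with invented numbers, of the extraction step described above: accumulate vertex positions over an interval, take their mean and spread, and unfold an assumed per-vertex resolution in quadrature to estimate the beamspot width (the real measurement uses fits rather than simple moments).

        # Beamspot estimate from accumulated vertex x-positions (toy numbers).
        from statistics import mean, stdev
        from math import sqrt

        vx = [0.052, 0.061, 0.048, 0.057, 0.043, 0.066, 0.050, 0.059]  # mm
        sigma_res = 0.004   # mm, assumed per-vertex resolution

        x0 = mean(vx)                    # beamspot x position
        width_meas = stdev(vx)           # measured spread of vertices
        width_beam = sqrt(max(width_meas**2 - sigma_res**2, 0.0))

        print(f"x0 = {x0:.4f} mm, beam width = {width_beam:.4f} mm")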

  7. Online measurement of LHC beam parameters with the ATLAS High Level Trigger

    CERN Document Server

    Strauss, E; The ATLAS collaboration

    2011-01-01

    We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms like b-tagging which calculate impact parameters or decay lengths benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to the LHC operators in real time. The measurement is performed by an algorithm running on the Level 2 trigger farm, leveraging the high rate of usable events. Dedicated algorithms perform a full scan of the silicon detector to reconstruct event vertices from registered tracks. The distribution of these vertices is aggregated across the farm and their shape is extracted through fits every 60 seconds to determine the beamspot position, size, and tilt. The reconstructed beam values are corrected for detector resolution effects, measured in situ using the separation of vertices whose tracks have been split into two collections. ...

  8. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m² that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  9. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy
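
    For reference, the three overlap scores quoted above can be written out for two toy binary segmentations represented as sets of voxel indices; the inclusion index is taken here as the fraction of the gold-standard mask covered by the automatic one, which may differ in detail from the study's definition.

        # Overlap metrics for two toy binary masks given as sets of voxel indices.
        auto = {(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 2, 3)}
        gold = {(1, 2, 3), (1, 2, 4), (2, 3, 4), (2, 2, 3)}

        inter = auto & gold
        dsc = 2 * len(inter) / (len(auto) + len(gold))   # Dice similarity coefficient
        ji  = len(inter) / len(auto | gold)              # Jaccard index
        ini = len(inter) / len(gold)                     # inclusion of gold in auto (assumed)

        print(f"DSC = {dsc:.2f}, JI = {ji:.2f}, INI = {ini:.2f}")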

  10. White matter atlas of the human spinal cord with estimation of partial volume effect.

    Science.gov (United States)

    Lévy, S; Benhamou, M; Naaman, C; Rainville, P; Callot, V; Cohen-Adad, J

    2015-10-01

    Template-based analysis has proven to be an efficient, objective and reproducible way of extracting relevant information from multi-parametric MRI data. Using common atlases, it is possible to quantify MRI metrics within specific regions without the need for manual segmentation. This method is therefore free from user-bias and amenable to group studies. While template-based analysis is common procedure for the brain, there is currently no atlas of the white matter (WM) spinal pathways. The goals of this study were: (i) to create an atlas of the white matter tracts compatible with the MNI-Poly-AMU template and (ii) to propose methods to quantify metrics within the atlas that account for partial volume effect. The WM atlas was generated by: (i) digitalizing an existing WM atlas from a well-known source (Gray's Anatomy), (ii) registering this atlas to the MNI-Poly-AMU template at the corresponding slice (C4 vertebral level), (iii) propagating the atlas throughout all slices of the template (C1 to T6) using regularized diffeomorphic transformations and (iv) computing partial volume values for each voxel and each tract. Several approaches were implemented and validated to quantify metrics within the atlas, including weighted-average and Gaussian mixture models. Proof-of-concept application was done in five subjects for quantifying magnetization transfer ratio (MTR) in each tract of the atlas. The resulting WM atlas showed consistent topological organization and smooth transitions along the rostro-caudal axis. The median MTR across tracts was 26.2. Significant differences were detected across tracts, vertebral levels and subjects, but not across laterality (right-left). Among the different tested approaches to extract metrics, the maximum a posteriori showed highest performance with respect to noise, inter-tract variability, tract size and partial volume effect. This new WM atlas of the human spinal cord overcomes the biases associated with manual delineation and partial
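
    A minimal sketch of the weighted-average extraction mentioned above, using the atlas partial-volume values as weights; the voxel values and weights are invented.

        # Weighted-average extraction of a metric (e.g. MTR) within one tract,
        # with partial-volume fractions as weights.  Numbers are invented.
        mtr     = [25.1, 27.3, 26.0, 24.8]   # metric value in each voxel
        pv_frac = [0.90, 0.75, 0.40, 0.10]   # partial-volume fraction of the tract

        mtr_tract = sum(w * x for w, x in zip(pv_frac, mtr)) / sum(pv_frac)
        print(f"tract MTR = {mtr_tract:.2f}")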

  11. A High-Resolution In Vivo Atlas of the Human Brain's Serotonin System.

    Science.gov (United States)

    Beliveau, Vincent; Ganz, Melanie; Feng, Ling; Ozenne, Brice; Højgaard, Liselotte; Fisher, Patrick M; Svarer, Claus; Greve, Douglas N; Knudsen, Gitte M

    2017-01-04

    The serotonin (5-hydroxytryptamine, 5-HT) system modulates many important brain functions and is critically involved in many neuropsychiatric disorders. Here, we present a high-resolution, multidimensional, in vivo atlas of four of the human brain's 5-HT receptors (5-HT1A, 5-HT1B, 5-HT2A, and 5-HT4) and the 5-HT transporter (5-HTT). The atlas is created from molecular and structural high-resolution neuroimaging data consisting of positron emission tomography (PET) and magnetic resonance imaging (MRI) scans acquired in a total of 210 healthy individuals. Comparison of the regional PET binding measures with postmortem human brain autoradiography outcomes showed a high correlation for the five 5-HT targets and this enabled us to transform the atlas to represent protein densities (in picomoles per milliliter). We also assessed the regional association between protein concentration and mRNA expression in the human brain by comparing the 5-HT density across the atlas with data from the Allen Human Brain atlas and identified receptor- and transporter-specific associations that show the regional relation between the two measures. Together, these data provide unparalleled insight into the serotonin system of the human brain. We present a high-resolution positron emission tomography (PET)- and magnetic resonance imaging-based human brain atlas of important serotonin receptors and the transporter. The regional PET-derived binding measures correlate strongly with the corresponding autoradiography protein levels. The strong correlation enables the transformation of the PET-derived human brain atlas into a protein density map of the serotonin (5-hydroxytryptamine, 5-HT) system. Next, we compared the regional receptor/transporter protein densities with mRNA levels and uncovered unique associations between protein expression and density at high detail. This new in vivo neuroimaging atlas of the 5-HT system not only provides insight in the human brain's regional protein

  12. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik; Cho, Seok; Choi, Ki Yong

    2009-04-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to be of a circulating loop-type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior in simulation of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual component, and the specification and the location of the instrumentation in detail
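
    As an aside on the quoted scaling, the arithmetic below applies the commonly cited Ishii-Kataoka relations (velocity and time scaling with the square root of the height ratio) to the half-height, 1/288-volume figures; treat the derived ratios as illustrative rather than facility specifications.

        # Scaling ratios implied by half height and 1/288 volume (illustrative).
        from math import sqrt

        length_ratio = 1 / 2                           # model height / prototype height
        volume_ratio = 1 / 288
        area_ratio   = volume_ratio / length_ratio     # 1/144
        velocity_ratio = sqrt(length_ratio)            # Ishii-Kataoka relation (assumed)
        time_ratio     = sqrt(length_ratio)
        flow_ratio     = area_ratio * velocity_ratio   # volumetric flow scaling

        print(f"area 1/{1/area_ratio:.0f}, velocity {velocity_ratio:.3f}, "
              f"time {time_ratio:.3f}, flow 1/{1/flow_ratio:.0f}")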

  13. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00008600; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up of the LHC environment, advanced data-analysis techniques are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at hardware level that is designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance, a highly parallel system was designed and is now being installed in ATLAS. At the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage, coarse resolution hits are matched against the patterns and the accepted hits u...

  14. The updated ATLAS Jet Trigger for the LHC Run II

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00359694; The ATLAS collaboration

    2015-01-01

    After the current shutdown, the LHC is about to resume operation for a new data-taking period, when it will operate with increased luminosity, event rate and center-of-mass energy. The new conditions will impose more demanding constraints on the ATLAS online trigger reconstruction and selection system. To cope with such increased constraints, the ATLAS High-Level Trigger, placed after a first hardware-based Level-1 trigger, has been redesigned by merging two previously separate software-based processing levels. In the new joint processing level, the algorithms run in the same computing nodes, thus sharing resources, minimizing the data transfer from the detector buffers and increasing the algorithm flexibility. The jet trigger software selects events containing high transverse momentum hadronic jets. It needs optimal jet energy resolution to help reject an overwhelming background while retaining good efficiency for interesting jets. In particular, this requires the CPU-intensive reconstruction of tridimen...

  15. Probabilistic atlas-based segmentation of combined T1-weighted and DUTE MRI for calculation of head attenuation maps in integrated PET/MRI scanners.

    Science.gov (United States)

    Poynton, Clare B; Chen, Kevin T; Chonde, Daniel B; Izquierdo-Garcia, David; Gollub, Randy L; Gerstner, Elizabeth R; Batchelor, Tracy T; Catana, Ciprian

    2014-01-01

    We present a new MRI-based attenuation correction (AC) approach for integrated PET/MRI systems that combines both segmentation- and atlas-based methods by incorporating dual-echo ultra-short echo-time (DUTE) and T1-weighted (T1w) MRI data and a probabilistic atlas. Segmented atlases were constructed from CT training data using a leave-one-out framework and combined with T1w, DUTE, and CT data to train a classifier that computes the probability of air/soft tissue/bone at each voxel. This classifier was applied to segment the MRI of the subject of interest and attenuation maps (μ-maps) were generated by assigning specific linear attenuation coefficients (LACs) to each tissue class. The μ-maps generated with this "Atlas-T1w-DUTE" approach were compared to those obtained from DUTE data using a previously proposed method. For validation of the segmentation results, the segmented CT μ-maps were considered the "silver standard"; the segmentation accuracy was assessed qualitatively and quantitatively through calculation of the Dice similarity coefficient (DSC). Relative change (RC) maps between the CT- and MRI-based attenuation-corrected PET volumes were also calculated for a global voxel-wise assessment of the reconstruction results. The μ-maps obtained using the Atlas-T1w-DUTE classifier agreed well with those derived from CT; the mean DSCs for the Atlas-T1w-DUTE-based μ-maps across all subjects were higher than those for the DUTE-based μ-maps; the atlas-based μ-maps also showed a lower percentage of misclassified voxels across all subjects. RC maps from the atlas-based technique also demonstrated improvement in the PET data compared to the DUTE method, both globally and regionally.
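
    For reference, the Dice similarity coefficient used for the quantitative comparison can be computed per tissue class as in the short sketch below; the arrays are toy examples, not the study's data.

        # Minimal sketch of the DSC used above to compare a segmentation against
        # the segmented-CT reference, computed for one tissue class.
        import numpy as np

        def dice(seg_a: np.ndarray, seg_b: np.ndarray, label: int) -> float:
            """DSC = 2|A∩B| / (|A| + |B|) for one class (e.g. air/soft tissue/bone)."""
            a = (seg_a == label)
            b = (seg_b == label)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Toy 1D "volumes" with labels 0=air, 1=soft tissue, 2=bone.
        ct_based  = np.array([0, 1, 1, 2, 2, 1])
        mri_based = np.array([0, 1, 1, 2, 1, 1])
        print(round(dice(ct_based, mri_based, label=2), 2))   # 0.67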

  16. The ATLAS Education and Outreach Group

    CERN Multimedia

    M. Barnett

    With the unprecedented scale and duration of ATLAS and the unique possibilities to make groundbreaking discoveries in physics, ATLAS has special opportunities to communicate the importance and role of our accomplishments. We want to participate in educating the next generation of scientific and other leaders in our society by involving students of many levels in our research. The Education and Outreach Group has focused on producing informational material of various sorts - like brochures, posters, a film, animations and a public website - to assist the members of the collaboration in their contacts with students, teachers and the general public. Another aim is to facilitate the teaching of particle physics and particularly the role of the ATLAS Experiment by providing ideas and educational material. The Education and Outreach Group meets every ATLAS week, with an attendance of between 25 and 40 people. The meetings have become an interesting forum for education and outreach projects and new ideas. The comi...

  17. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  18. Event filter monitoring with the ATLAS tile calorimeter

    CERN Document Server

    Fiorini, L

    2008-01-01

    The ATLAS Tile Calorimeter detector is presently involved in an intense phase of subsystems integration and commissioning with muons of cosmic origin. Various monitoring programs have been developed at different levels of the data flow to tune the set-up of the detector running conditions and to provide a fast and reliable assessment of the data quality already during data taking. This paper focuses on the monitoring system integrated in the highest level of the ATLAS trigger system, the Event Filter, and its deployment during the Tile Calorimeter commissioning with cosmic ray muons. The key feature of Event Filter monitoring is the capability of performing detector and data quality control on complete physics events at the trigger level, hence before events are stored on disk. In ATLAS' online data flow, this is the only monitoring system capable of giving a comprehensive event quality feedback.

  19. FTK: The hardware Fast TracKer of the ATLAS experiment at CERN

    Directory of Open Access Journals (Sweden)

    Maznas Ioannis

    2017-01-01

    In the ever-increasing pile-up environment of the Large Hadron Collider, the trigger systems of the experiments must use more sophisticated techniques in order to increase the purity of signal physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding system implemented in custom hardware that is designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every Level-1 (L1) accept (at a maximum rate of 100 kHz). To accomplish this, FTK is a highly parallel system which is currently being installed in ATLAS. It will first provide the trigger system with tracks in the central region of the ATLAS detector, and next year it is expected that it will cover the whole detector. The system is based on pattern matching between hits coming from the silicon trackers of the ATLAS detector and one billion simulated patterns stored in specially designed ASIC Associative Memory chips. This document will provide an overview of the FTK system architecture, its design and information about its expected performance.

  20. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced the computation time to one third (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
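
    The two-stage selection logic can be sketched as follows, with placeholder relevance metrics standing in for the simple-registration and full-fledged-registration scores; this is an illustration of the scheme, not the authors' implementation.

        # Minimal sketch: a cheap preliminary metric trims the full atlas
        # collection to an augmented subset, and an expensive refined metric
        # then picks the final fusion set from that subset.
        from typing import Callable, List, Sequence

        def two_stage_select(atlases: Sequence[str],
                             cheap_score: Callable[[str], float],
                             refined_score: Callable[[str], float],
                             augmented_size: int,
                             fusion_size: int) -> List[str]:
            # Stage 1: rough ranking with the low-cost metric (simple registration).
            augmented = sorted(atlases, key=cheap_score, reverse=True)[:augmented_size]
            # Stage 2: full-fledged registration only on the augmented subset.
            return sorted(augmented, key=refined_score, reverse=True)[:fusion_size]

        # Toy usage with hypothetical score functions.
        atlases = [f"atlas_{i}" for i in range(30)]
        cheap   = lambda a: hash(a) % 100          # stand-in for preliminary relevance
        refined = lambda a: (hash(a) // 7) % 100   # stand-in for refined relevance
        print(two_stage_select(atlases, cheap, refined, augmented_size=10, fusion_size=4))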

  1. A hardware fast tracker for the ATLAS trigger

    Science.gov (United States)

    Asbah, Nedaa

    2016-09-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch-crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already restarted with a much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1-accepted event (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.

  2. Atlas of Vega: 3850-6860 Å

    Science.gov (United States)

    Kim, Hyun-Sook; Han, Inwoo; Valyavin, G.; Lee, Byeong-Cheol; Shimansky, V.; Galazutdinov, G. A.

    2009-10-01

    We present a high-resolving-power (λ/Δλ = 90,000), high signal-to-noise ratio (~700) spectral atlas of Vega covering the 3850-6860 Å wavelength range. The atlas is the result of averaging spectra recorded with the echelle spectrograph BOES fed by the 1.8 m telescope at Bohyunsan Observatory (Korea). The atlas is provided only in machine-readable form (electronic data file) and will be available in the SIMBAD database upon publication. Based on data collected with the 1.8 m telescope operated at BOAO Observatory, Korea.
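
    As a quick worked number, assuming a representative wavelength near the middle of the covered range, the quoted resolving power corresponds to a wavelength resolution element of roughly

        \[
          \Delta\lambda \;=\; \frac{\lambda}{R} \;\approx\; \frac{5500\ \text{Å}}{90\,000} \;\approx\; 0.06\ \text{Å}.
        \]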

  3. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias

    by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the LHC Computing Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous...

  4. Brief retrospection on Hungarian school atlases

    Science.gov (United States)

    Klinghammer, István; Jesús Reyes Nuñez, José

    2018-05-01

    The first part of this article is dedicated to the history of Hungarian school atlases to the end of the 1st World War. Although the first maps included in a Hungarian textbook were probably made in 1751, the publication of atlases for schools is dated almost 50 years later, when professor Ézsáiás Budai created his "New School Atlas for elementary pupils" in 1800. This was followed by a long period of 90 years, when the school atlases were mostly translations and adaptations of foreign atlases, the majority of which were made in German-speaking countries. In those years, a school atlas made by a Hungarian astronomer, Antal Vállas, should be highlighted as a prominent independent piece of work. In 1890, a talented cartographer, Manó Kogutowicz founded the Hungarian Geographical Institute, which was the institution responsible for producing school atlases for the different types of schools in Hungary. The professional quality of the school atlases published by his institute was also recognized beyond the Hungarian borders by prizes won in international exhibitions. Kogutowicz laid the foundations of the current Hungarian school cartography: this statement is confirmed in the second part of this article, when three of his school atlases are presented in more detail to give examples of how the pupils were introduced to the basic cartographic and astronomic concepts as well as how different innovative solutions were used on the maps.

  5. Search for the standard-model Higgs boson in the associated WH production with 1.47 fb-1 data of the ATLAS experiment at the LHC

    International Nuclear Information System (INIS)

    Verlage, Tobias

    2011-01-01

    The Large Hadron Collider is a particle accelerator at CERN, in which protons have been brought to collision at a center-of-mass energy of √s = 7 TeV since 30 March 2010. These events can be observed by means of the ATLAS detector, one of the two general-purpose detectors at the Large Hadron Collider. One of the main purposes of the ATLAS detector is the search for the Standard-Model Higgs boson. This thesis describes a search for the Standard-Model Higgs boson in which the production of the Higgs boson in association with a vector boson W± and its subsequent decay into a bottom-quark pair is studied. For this purpose, data of the ATLAS detector corresponding to an integrated luminosity of 1.47 fb^-1 are compared with simulated physics events. A cut-based analysis for separating signal events from background processes is presented. Furthermore, systematic uncertainties are determined. Finally, an upper exclusion limit on the production rate of a Standard-Model Higgs boson as a function of its mass in the range from 110 GeV to 139 GeV is calculated and discussed. The strongest exclusion limit is obtained for a Higgs boson with a mass of 110 GeV, for which a production rate 16 times larger than the Standard-Model prediction can be excluded at a confidence level of 95%. Over the whole studied mass range, upper exclusion limits between 16 and 29 times the Standard-Model production rate are obtained.
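
    As a heavily simplified illustration of what such an exclusion means (and not the statistical method used in the thesis, which would rely on the full ATLAS limit-setting machinery), the sketch below computes a 95% CL upper limit on a signal-strength multiplier for a single-bin counting experiment with known background; all yields are hypothetical.

        # Simplified counting-experiment limit: the smallest mu such that
        # P(N <= n_obs | b + mu * s_SM) < 1 - CL. Illustrative only.
        from scipy.stats import poisson

        def upper_limit_mu(n_obs, b, s_sm, cl=0.95, step=0.01):
            mu = 0.0
            while poisson.cdf(n_obs, b + mu * s_sm) >= 1.0 - cl:
                mu += step
            return mu

        # Hypothetical yields: 52 observed events, 50 expected background events,
        # 1 expected Standard-Model signal event.
        print(round(upper_limit_mu(n_obs=52, b=50.0, s_sm=1.0), 1))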

  6. The ATLAS muon trigger: Experience and performance in the first 3 years of LHC pp runs

    International Nuclear Information System (INIS)

    Ventura, Andrea

    2013-01-01

    The ATLAS experiment at CERN's Large Hadron Collider (LHC) deploys a three-level processing scheme for the trigger system. The Level-1 muon trigger system gets its input from fast muon trigger detectors. Sector logic boards select muon candidates, which are passed via an interface board to the central trigger processor and then to the High Level Trigger (HLT). The muon HLT is purely software based and encompasses a Level-2 trigger followed by an event filter for a staged trigger approach. It has access to the data of the precision muon detectors and other detector elements to refine the muon hypothesis. The ATLAS experiment has taken data with high efficiency continuously over entire running periods from 2010 to 2012, for which sophisticated triggers were mandatory to safeguard the highest physics output while effectively reducing the event rate. The ATLAS muon trigger has successfully adapted to this challenging environment. The selection strategy has been optimized for the various physics analyses involving muons in the final state. This work briefly summarizes these three years of experience with the ATLAS muon trigger and reports on the efficiency, resolution, and general performance of the muon trigger

  7. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  8. Instrumentation and measurement method for the ATLAS test facility

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Byong Jo; Chu, In Chul; Eu, Dong Jin; Kang, Kyong Ho; Kim, Yeon Sik; Song, Chul Hwa; Baek, Won Pil

    2007-03-15

    An integral effect test loop for pressurized water reactors (PWRs), ATLAS, was constructed by the thermal-hydraulic safety research division at KAERI. The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, the APR1400, which is a Korean evolutionary-type nuclear reactor. A total of about 1,300 instruments is installed in the ATLAS test facility. In this report, the instrumentation of the ATLAS test facility and the related measurement methods are introduced.

  9. Engineering the ATLAS TAG Browser

    CERN Document Server

    Zhang, Q; The ATLAS collaboration

    2011-01-01

    ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. TAGs from all ATLAS physics and Monte Carlo data sets are routinely loaded into Oracle databases as an integral part of event processing. As data volumes increase, more and more sites are joining the distributed TAG data hosting topology [1]. Meanwhile, TAG content and database schemata continue to evolve as new user requirements and additional sources of metadata emerge. All of this has posed many challenges to the development of ELSSI, which must support vast amounts of TAG data while sources, content, geographic locations, and user query patterns may change over time. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary service...

  10. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237783; The ATLAS collaboration; Zwalinski, L.; Bortolin, C.; Vogt, S.; Godlewski, J.; Crespo-Lopez, O.; Van Overbeek, M.; Blaszcyk, T.

    2017-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space made available by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower-temperature operation (below -35 °C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose of up to 550 fb^-1 integrated luminosity.

  11. Hidden Valley Search at ATLAS

    CERN Document Server

    Verducci, M

    2011-01-01

    A number of extensions of the Standard Model result in neutral and weakly-coupled particles that decay to multi-hadron or multi-lepton final states with macroscopic decay lengths. These particles, whose decay paths can be comparable to the ATLAS detector dimensions, represent, from an experimental point of view, a challenge both for the trigger and for the reconstruction capabilities of the ATLAS detector. We will present a set of signature-driven triggers for the ATLAS detector that target such displaced decays, evaluate their performance for some benchmark models, and describe analysis strategies and limits on the production of such long-lived particles. A first estimate of the Hidden Valley trigger rates has been obtained with 6 pb^-1 of data collected at ATLAS during the data taking of 2010.

  12. Measurement of the radiation fields in the ATLAS detector and its cavern with the ATLAS-MPX pixelated silicon detectors

    Science.gov (United States)

    Bouchami, Jihene

    The LHC proton-proton collisions create a hard radiation environment in the ATLAS detector. In order to quantify the effects of this environment on the detector performance and human safety, several Monte Carlo simulations have been performed. However, direct measurement is indispensable to monitor radiation levels in ATLAS and also to verify the simulation predictions. For this purpose, sixteen ATLAS-MPX devices have been installed at various positions in the ATLAS experimental and technical areas. They are composed of a pixelated silicon detector called MPX whose active surface is partially covered with converter layers for the detection of thermal, slow and fast neutrons. The ATLAS-MPX devices perform real-time measurement of radiation fields by recording the detected particle tracks as raster images. The analysis of the acquired images allows the identification of the detected particle types by the shapes of their tracks. For this purpose, a pattern-recognition software package called MAFalda has been developed. Since the tracks of strongly ionizing particles are influenced by charge sharing between adjacent pixels, a semi-empirical model describing this effect has been developed. Using this model, the energy of strongly ionizing particles can be estimated from the size of their tracks. The converter layers covering each ATLAS-MPX device form six different regions. The efficiency of each region in detecting thermal, slow and fast neutrons has been determined by calibration measurements with known sources. The study of the response of the ATLAS-MPX devices to the radiation produced by proton-proton collisions at a center-of-mass energy of 7 TeV has demonstrated that the number of recorded tracks is proportional to the LHC luminosity. This result allows the ATLAS-MPX devices to be employed as luminosity monitors. To perform an absolute luminosity measurement and calibration with these devices, the van der Meer method based on the LHC beam parameters has been proposed. Since the ATLAS
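
    The proportionality between recorded tracks and luminosity that underlies the luminosity-monitor idea can be sketched as a one-parameter calibration; the numbers below are illustrative and are not measurements from the thesis.

        # Minimal sketch: if track counts per frame scale linearly with the LHC
        # luminosity, a single proportionality constant calibrates the device.
        import numpy as np

        luminosity = np.array([0.5, 1.0, 2.0, 3.5, 5.0])      # arbitrary units
        track_counts = np.array([104, 197, 410, 688, 1010])   # tracks per acquisition

        # Least-squares fit of counts = k * luminosity (proportional model, no offset).
        k = np.dot(luminosity, track_counts) / np.dot(luminosity, luminosity)

        def estimate_luminosity(counts: float) -> float:
            """Invert the calibration to estimate the luminosity from a track count."""
            return counts / k

        print(round(k, 1), round(estimate_luminosity(300.0), 2))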

  13. ATLAS Upgrades: a challenge for the next Decades

    CERN Document Server

    Aielli, Giulio; The ATLAS collaboration

    2016-01-01

    After the successful operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC is now running at a center-of-mass energy of 13 TeV. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering on the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset for ATLAS and CMS from the few hundred fb^-1 expected for LHC running in the next 10 years to 3000 fb^-1 by around 2035. In parallel, the experiments need to be kept in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Along with maintenance and consolidation of the detector in the past few years, ATLAS has added an inner b-layer to its tracking system. The challenge of coping with the HL-LHC instantaneous and integrated luminosities, along with the associated radiation levels, requires further maj...

  14. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the LHC collisions. To reconstruct the trajectories of charged particles produced in these collisions, the ATLAS tracking system is equipped with silicon planar sensors and drift-tube-based detectors. They constitute the ATLAS Inner Detector. In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires accurately determining its almost 36,000 degrees of freedom. The demanded precision for the alignment of the silicon sensors is thus below 10 micrometers. This implies using a large sample of high-momentum, isolated charged-particle tracks. The high-level trigger selects those tracks online. The raw data with the hit information of the triggered tracks are then stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of the track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of ...
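
    The essence of track-based alignment can be illustrated with a toy one-parameter example: the translation offset of a single sensor is recovered by minimising the track-hit residuals, whereas the real ATLAS problem solves for the almost 36,000 parameters simultaneously. Everything below is a hedged sketch, not the ATLAS alignment software.

        # Toy track-based alignment: for one sensor, the least-squares estimate
        # of its translation offset is simply the mean track-hit residual.
        import numpy as np

        rng = np.random.default_rng(0)
        true_offset = 0.012                      # mm, the misalignment to recover
        n_tracks = 1000
        track_prediction = rng.uniform(-10, 10, n_tracks)                 # mm, from the track fit
        measured_hit = track_prediction + true_offset + rng.normal(0, 0.02, n_tracks)

        residuals = measured_hit - track_prediction
        estimated_offset = residuals.mean()       # argmin of sum((r - delta)^2)
        print(f"estimated offset: {estimated_offset*1e3:.1f} um "
              f"+/- {residuals.std(ddof=1)/np.sqrt(n_tracks)*1e3:.1f} um")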

  15. The ATLAS Trigger: Recent Experience and Future Plans

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    This paper will give an overview of the ATLAS trigger design and its innovative features. It will describe the valuable experience gained in running the trigger reconstruction and event selection in the fast-changing environment of the detector commissioning during 2008. It will also include a description of the trigger selection menu and its 2009 deployment plan from first collisions to the nominal luminosity. ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system needs to efficiently reject a large rate of background events and still select potentially interesting ones with high efficiency. After a first-level trigger implemented in custom electronics, the trigger event selection is made by the High Level Trigger (HLT) system, implemented in software. To reduce the processing time to manageable levels, the HLT uses seeded, step-wise and fast selection algorithms, aiming at the earliest possible rejection of background events. The ATLAS trigger event selection...

  16. 30 August 2013 - Senior Vice Minister for Foreign Affairs of Japan M. Matsuyama signing the guest book with the CERN Director-General, visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton and visiting the LHC tunnel at Point 1 with former ATLAS Japan national contact physicist T. Kondo. R. Voss and K. Yoshida present throughout.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    30 August 2013 - Senior Vice Minister for Foreign Affairs of Japan M. Matsuyama signing the guest book with the CERN Director-General, visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton and visiting the LHC tunnel at Point 1 with former ATLAS Japan national contact physicist T. Kondo. R. Voss and K. Yoshida present throughout.

  17. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track-based selection in the online environment is crucial for the efficient real-time selection of the rare physics processes of interest. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging the experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, specific trigger objects have been improved by building algorithms using detailed tracking and vertexing in specific detector regions to improve background rejection without losing signal efficiency. Secondly, since 2015 all trigger areas have benefited from a new high-performance Inner Detector (ID) software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced in future by the installation...

  18. Tracking and flavour tagging selection in the ATLAS High Level Trigger

    CERN Document Server

    Calvetti, Milene; The ATLAS collaboration

    2017-01-01

    In high-energy physics experiments, track-based selection in the online environment is crucial for the detection of physics processes of interest for further study. This is of particular importance at the Large Hadron Collider (LHC), where the increasingly harsh collision environment is challenging the participating experiments to improve the performance of their online selection. Principal among these challenges is the increasing number of interactions per bunch crossing, known as pileup. In the ATLAS experiment the challenge has been addressed with multiple strategies. Firstly, individual trigger groups focusing on specific physics objects have implemented novel algorithms which make use of the detailed tracking and vertexing performed within the trigger to improve rejection without losing efficiency. Secondly, since 2015 all trigger areas have also benefited from a new high-performance inner detector software tracking system implemented in the High Level Trigger. Finally, performance will be further enhanced i...

  19. The ATLAS semiconductor tracker (SCT)

    International Nuclear Information System (INIS)

    Jackson, J.N.

    2005-01-01

    The ATLAS detector (CERN/LHCC 94-43 (1994)) is designed to study a wide range of physics at the CERN Large Hadron Collider (LHC) at luminosities up to 10^34 cm^-2 s^-1 with a bunch-crossing rate of 40 MHz. The Semiconductor Tracker (SCT) forms a key component of the Inner Detector (vol. 1, ATLAS TDR 4, CERN/LHCC 97-16 (1997); vol. 2, ATLAS TDR 5, CERN/LHCC 97-17 (1997)), which is situated inside a 2 T solenoid field. The ATLAS Semiconductor Tracker (SCT) utilises 4088 silicon modules with binary readout mounted on carbon-fibre composite structures arranged in the form of barrels in the central region and discs in the forward region. The construction of the SCT is now well advanced. The design of the SCT modules, services and support structures will be briefly outlined. A description of the various stages in the construction process will be presented with examples of the performance achieved and the main difficulties encountered. Finally, the current status of the construction is reviewed.

  20. Towards a Level-1 tracking trigger for the ATLAS experiment at the High Luminosity LHC

    CERN Document Server

    Martin, T A D; The ATLAS collaboration

    2014-01-01

    The ability to apply fast processing that can take account of the properties of the tracks being reconstructed will enhance the rejection, while retaining high efficiency for events with desired signatures, such as high-momentum leptons or multiple jets. Studies to understand the feasibility of such a system have begun, and proceed in two directions: a fast readout for high-granularity silicon detectors, and a fast pattern recognition algorithm to be applied just after the front-end readout for specific sub-detectors. Both existing and novel technologies can offer solutions. The aim of these studies is to determine the parameter space to which this system must be adapted. The status of ongoing tests on specific hardware components crucial for this system, needed both to increase the ATLAS physics potential and to fully satisfy the trigger requirements at very high luminosities, is discussed.

  1. ATLAS Thesis Award 2017

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on 22 February, 2018. They are pictured here with Karl Jakobs (ATLAS Spokesperson), Max Klein (ATLAS Collaboration Board Chair) and Katsuo Tokushuku (ATLAS Collaboration Board Deputy Chair).

  2. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and the luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level-2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  3. EnviroAtlas Impervious Proximity Gradient Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). In any given 1-square meter...

  4. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  5. gFEX, the ATLAS Calorimeter Global Feature Extractor

    CERN Document Server

    Takai, Helio; The ATLAS collaboration; Chen, Hucheng

    2015-01-01

    The global feature extractor (gFEX) is a component of the Level-1 Calorimeter trigger Phase-I upgrade for the ATLAS experiment. It is intended to identify patterns of energy associated with the hadronic decays of high momentum Higgs, W, & Z bosons, top quarks, and exotic particles in real time at the LHC crossing rate. The single processor board will be implemented as a fast reconfigurable processor based on four large FPGAs. The board will receive coarse-granularity information from all the ATLAS calorimeters on 264 optical fibers with the data transferred at the 40 MHz LHC clock frequency. The gFEX will be controlled by a single system-on-chip processor, ZYNQ, that will be used to configure FPGAs, monitor board health, and interface to external signals. Although the board is being designed specifically for the ATLAS experiment, it is sufficiently generic that it could be used for fast data processing at other HEP or NP experiments. We will present the design of the gFEX board and discuss how it is being...

  6. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  7. ATLAS Detector Operation 2011 - Muon System

    CERN Document Server

    Iakovidis, G; The ATLAS collaboration

    2012-01-01

    During the 2011 LHC data-taking period the ATLAS detector recorded 5.22 fb^-1, which is 96.5% of the data delivered in proton-proton collisions. The Muon Spectrometer was improved to a 100% operational fraction at the Level-1 trigger and to more than a 98.7% operational fraction for the trigger and precision chambers. More than 99% of the data recorded with the Muon Spectrometer was good for physics analysis, illustrating excellent performance. This poster presents the performance of the Muon Spectrometer trigger chambers as well as the precision chambers; in addition, the combined Muon Spectrometer performance is presented.

  8. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias

    2008-01-01

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Denmark, Finland, Norway and Sweden. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the Enabling Grids for E-sciencE Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed...

  9. ATLAS Offline Software Performance Monitoring and Optimization

    CERN Document Server

    Chauhan, N; Kittelmann, T; Langenberg, R; Mandrysch , R; Salzburger, A; Seuster, R; Ritsch, E; Stewart, G; van Eldik, N; Vitillo, R

    2014-01-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline Athena framework, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide optimisation. Code can be instrumented firstly using the PAPI tool, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event gives a good understanding of the whole algorithm-level performance of ATLAS code. Further data can be obtained using pin, a dynamic binary instrumentation tool. Pintools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine-grained routine and instruction level instrumentation is...
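
    The following is a minimal sketch of the per-algorithm start/stop scoping pattern described above, not of the PAPI C API itself: it uses only the Python standard library (wall time and rusage) as stand-ins for hardware counters, and the algorithm names and the toy event loop are illustrative assumptions.

```python
# Sketch of per-algorithm, per-event counter scoping (PAPI-style start/stop),
# using standard-library timers instead of real hardware counters.
import time
import resource
from contextlib import contextmanager
from collections import defaultdict

totals = defaultdict(lambda: {"wall_s": 0.0, "cpu_s": 0.0, "calls": 0})

@contextmanager
def count_region(name):
    """Start counting on entry to an algorithm, stop and accumulate on exit."""
    t0 = time.perf_counter()
    r0 = resource.getrusage(resource.RUSAGE_SELF)
    try:
        yield
    finally:
        r1 = resource.getrusage(resource.RUSAGE_SELF)
        totals[name]["wall_s"] += time.perf_counter() - t0
        totals[name]["cpu_s"] += (r1.ru_utime + r1.ru_stime) - (r0.ru_utime + r0.ru_stime)
        totals[name]["calls"] += 1

def run_event(event):
    # hypothetical algorithms standing in for Athena algorithms
    with count_region("Tracking"):
        sum(x * x for x in range(10_000))
    with count_region("Calorimetry"):
        sorted(range(5_000), reverse=True)

for event in range(100):          # stand-in for the processed-event loop
    run_event(event)

for name, c in totals.items():
    print(f"{name:12s} calls={c['calls']:4d} wall={c['wall_s']:.3f}s cpu={c['cpu_s']:.3f}s")
```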

  10. ATLAS Offline Software Performance Monitoring and Optimization

    CERN Document Server

    Chauhan, N; The ATLAS collaboration; Kittelmann, T; Langenberg, R; Mandrysch , R; Salzburger, A; Seuster, R; Ritsch, E; Stewart, G; van Eldik, N; Vitillo, R

    2013-01-01

    In a complex multi-developer, multi-package software environment, such as the ATLAS offline Athena framework, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide optimisation. Code can be instrumented firstly using the PAPI tool, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event gives a good understanding of the whole algorithm-level performance of ATLAS code. Further data can be obtained using pin, a dynamic binary instrumentation tool. Pintools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine-grained routine and instruction level instrumentation is...

  11. Atlas Linguarum Fennicarum / Jüri Viikberg

    Index Scriptorium Estoniae

    Viikberg, Jüri

    2007-01-01

    Book review: Atlas Linguarum Fennicarum : ALFE. 1, Itämerensuomalainen kielikartasto = Läänemeresoome keelteatlas = Ostseefinnischer Sprachatlas [editor responsible for volume 1: Seppo Suhonen]. Helsinki : Suomalaisen kirjallisuuden seura, 2004. 464 pp.; Atlas Linguarum Fennicarum : ALFE. 2, Itämerensuomalainen kielikartasto = Läänemeresoome keeleatlas = Ostseefinnischer Sprachatlas [editor responsible for volume 2: Tiit-Rein Viitso]. Helsinki : Suomalaisen kirjallisuuden seura : Kotimaisen kielten tutkimuskeskus, 2007. 540 pp.

  12. National Atlas of Arctic: structure and creation approaches

    Directory of Open Access Journals (Sweden)

    N. S. Kasimov

    2015-01-01

    Full Text Available On the instructions of the President and Government of the Russian Federation, work on the development of the National Atlas of the Arctic has started in the country. In this article the authors present their ideas from the viewpoint of geographers who are well experienced in the field of cartographic works. A structure for the future Atlas and approaches to its development are proposed. The totality of experience from the preparation of other geographical atlases in both the USSR and Russia, as well as the latest achievements of cartography, aerospace sources and GIS technologies, is recommended to be used. The National Atlas of the Arctic is understood as a collection of spatial-temporal knowledge and information about geographical, ecological, economic, historical-ethnographic, cultural and social features of the Arctic. This cartographic model of the territory is designed for use in a wide range of scientific, managing, economic, defensive and social activities. A hard copy of the atlas is intended to be used as a scientific reference publication, while its electronic version will make it possible to update and improve its content according to the various directions of its practical use. The 16 sections proposed in a draft of the Atlas content are as follows: introductory, geological structure, relief, mineral resources, environment evolution, climate, land waters, seas, seashores, snow cover, glaciers, permafrost, soils, flora and fauna, state of the environment and nature protection, population, economics, and prospects for the future. The popular-scientific edition of the Atlas is intended for use by a wide circle of readers and also as a textbook for all levels of education. The presentation of material in the Atlas should combine a high scientific level with accessible language, and in a popular form it will convey traditions of careful treatment of nature and the nature-protection ethics of the religious confessions of the local peoples.

  13. Fast pattern recognition with the ATLAS L1Track trigger for HL-LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00530554; The ATLAS collaboration

    2017-01-01

    A fast hardware-based track trigger is being developed in ATLAS for the High Luminosity upgrade of the Large Hadron Collider. The goal is to achieve trigger levels in the high pile-up conditions of the High Luminosity Large Hadron Collider that are similar to or better than those achieved at low pile-up conditions, by adding tracking information to the ATLAS hardware trigger. A method for fast pattern recognition using the Hough transform is investigated. In this method, detector hits are mapped onto a 2D parameter space with one parameter related to the transverse momentum and one to the initial track direction. The performance of the Hough transform is studied at different pile-up values. It is also compared, using full event simulation of events with an average pile-up of 200, with a method based on matching detector hits to pattern banks of simulated tracks stored in custom-made Associative Memory ASICs. The pattern recognition is followed by a track fitting step which calculates the track parameters. The spee...
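
    The following is a minimal sketch, under simplifying assumptions, of the Hough-transform idea described above: each hit (r, phi) votes in a 2D accumulator whose axes are the signed inverse transverse momentum q/pT and the initial azimuth phi0. The first-order relation phi0 ≈ phi + 0.3·B/2 · (q/pT) · r (r in metres, B in tesla, pT in GeV), the field value and all binning choices are illustrative assumptions, not values from the record.

```python
# Toy Hough transform for track finding: hits vote in a (q/pT, phi0) accumulator.
import numpy as np

B = 2.0                                   # assumed solenoid field [T]
A = 0.3 * B / 2.0                         # curvature constant

qpt_bins = np.linspace(-1.0, 1.0, 200)    # q/pT axis [1/GeV] (|pT| >= 1 GeV)
phi0_bins = np.linspace(-np.pi, np.pi, 400)
acc = np.zeros((len(qpt_bins), len(phi0_bins)), dtype=np.int32)

def fill(hits):
    """hits: iterable of (r [m], phi [rad]) pairs from one detector region."""
    for r, phi in hits:
        phi0 = phi + A * qpt_bins * r              # one phi0 value per q/pT bin
        idx = np.digitize(phi0, phi0_bins) - 1
        ok = (idx >= 0) & (idx < len(phi0_bins))
        acc[np.arange(len(qpt_bins))[ok], idx[ok]] += 1

def candidates(min_hits=7):
    """Accumulator bins above threshold are track candidates (q/pT, phi0)."""
    iq, ip = np.where(acc >= min_hits)
    return list(zip(qpt_bins[iq], phi0_bins[ip]))

# toy usage: a q/pT = 0.5 GeV^-1, phi0 = 0.3 track crossing 8 silicon layers
layers = np.linspace(0.05, 1.0, 8)
fill([(r, 0.3 - A * 0.5 * r) for r in layers])
print(candidates())
```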

  14. ATLAS overview week highlights

    CERN Multimedia

    D. Froidevaux

    2005-01-01

    A warm and early October afternoon saw the beginning of the 2005 ATLAS overview week, which took place Rue de La Montagne Sainte-Geneviève in the heart of the Quartier Latin in Paris. All visitors had been warned many times by the ATLAS management and the organisers that the premises would be the subject of strict security clearance because of the "plan Vigipirate", which remains at some level of alert in all public buildings across France. The public building in question is now part of the Ministère de La Recherche, but used to host one of the so-called French "Grandes Ecoles", called l'Ecole Polytechnique (in France there is only one Ecole Polytechnique, whereas there are two in Switzerland) until the end of the seventies, a little while after it opened its doors also to women. In fact, the setting chosen for this ATLAS overview week by our hosts from LPNHE Paris has turned out to be ideal and the security was never an ordeal. For those seeing Paris for the first time, there we...

  15. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snap-shot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (See Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before lowering it into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  16. The FTK to Level-2 Interface Card (FLIC)

    CERN Document Server

    Wang, R.; The ATLAS collaboration; Auerbach, Benjamin; Blair, Robert; Drake, Gary; Love, Jeremy; Proudfoot, James; Anderson, J.; Zhang, Jinlong

    2016-01-01

    The FTK to Level-2 Interface Card (FLIC) of the ATLAS Fast TracKer (FTK) trigger upgrade is the final component in the FTK chain of custom electronics. The FTK performs full event tracking using the ATLAS silicon detectors for every Level-1 (L1) accepted event at 100 kHz. The FLIC is a custom Advanced Telecommunications Computing Architecture (ATCA) card that interfaces the upstream FTK system with the ATLAS trigger and data acquisition (TDAQ) system, and allows for event processing on commercial PC blades making use of the 10 Gb Ethernet full-mesh ATCA backplane. The FLIC receives data on 8 optical links at a bandwidth of about 1 Gbps per channel, reformats the data to the ATLAS standard record format, and performs the conversion from local to global module identifier using look-up tables in SRAM. After processing, the event records are sent out to the TDAQ system using the S-LINK protocol at 2 Gbps, with a latency of O(10 microseconds). The data processing is handled in two Xilinx Virtex-6 FPGAs, with two additional ...
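
    Below is a minimal sketch, with entirely illustrative identifiers and table contents, of the look-up-table step described above: converting a (tower, local module) pair from an FTK event record into an ATLAS global module identifier. On the FLIC this mapping lives in SRAM next to the FPGA; here it is just a Python dictionary, and the tower count, field widths and payload words are assumptions.

```python
# Toy local-to-global module-identifier conversion via a look-up table.
N_TOWERS = 64                 # assumed number of FTK eta-phi towers

# build a toy table: (tower, local_id) -> global_id (encoding is made up)
lut = {(t, l): (t << 12) | l for t in range(N_TOWERS) for l in range(256)}

def reformat_record(tower, local_hits):
    """Replace local module ids by global ids, as the FLIC does per record."""
    return [(lut[(tower, local_id)], payload) for local_id, payload in local_hits]

# toy usage: three hits from tower 5, each carrying a dummy payload word
print(reformat_record(5, [(0x12, 0xDEAD), (0x7F, 0xBEEF), (0x01, 0xCAFE)]))
```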

  17. ATLAS Strip Detector: Operational Experience and Run1-> Run2 Transition

    CERN Document Server

    Nagai, Koichi; The ATLAS collaboration

    2014-01-01

    The Large Hadron Collider operated very successfully during Run 1 and provided many opportunities for physics studies. It is currently undergoing consolidation work towards operation at $\sqrt{s}=14~\mathrm{TeV}$ in Run 2. The ATLAS experiment achieved excellent performance in Run 1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run 1 and the preparation towards Run 2 operation during LS1.

  18. The laser calibration of the ATLAS Tile Calorimeter during the LHC run 1

    Czech Academy of Sciences Publication Activity Database

    Abdallah, J.; Alexa, C.; Coutinho, Y.A.; Lokajíček, Miloš; Němeček, Stanislav

    2016-01-01

    Vol. 11, Oct (2016), 1-31, article no. T10005. ISSN 1748-0221. R&D Projects: GA MŠk(CZ) LG15047; GA MŠk LM2015068. Institutional support: RVO:68378271. Keywords: electronics * readout * calorimeter * hadronic * calibration * laser * stability * ATLAS * data analysis method. Subject RIV: BF - Elementary Particles and High Energy Physics. Impact factor: 1.220, year: 2016

  19. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and roles management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  20. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  1. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability to be covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
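
    The following is a minimal sketch, on a toy 3D grid, of the general idea of replacing per-voxel frequency counts by a generalized linear model: a logistic regression (a GLM with a binomial link) on voxel coordinates is fitted to "covered by liver / not covered" labels pooled over co-registered training shapes, and its predicted probabilities form the probabilistic atlas. The feature choice, grid size and synthetic "segmentations" are assumptions, not the paper's actual model.

```python
# Toy probabilistic-atlas construction with a GLM (logistic regression on coordinates).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
shape = (32, 32, 32)
coords = np.indices(shape).reshape(3, -1).T.astype(float)   # (N_voxels, 3)

# toy "segmentations": spheres with jittered centres stand in for registered livers
masks = []
for _ in range(10):
    centre = np.array([16.0, 16.0, 16.0]) + rng.normal(0, 1.5, 3)
    masks.append((np.linalg.norm(coords - centre, axis=1) < 9).astype(int))

X = np.vstack([coords] * len(masks))          # same voxel grid repeated per subject
y = np.concatenate(masks)                     # voxel-wise coverage labels

glm = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression(max_iter=500))
glm.fit(X, y)

prob_atlas = glm.predict_proba(coords)[:, 1].reshape(shape)  # P(liver) per voxel
print(prob_atlas[16, 16, 16], prob_atlas[0, 0, 0])
```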

  2. Rare Decays of B0(s) Mesons to Muon Pairs with the ATLAS Detector (Run 1)

    CERN Document Server

    Walkowiak, Wolfgang; The ATLAS collaboration

    2016-01-01

    The large amount of Heavy Flavor data collected by the ATLAS experiment at the LHC is potentially sensitive to New Physics, which could be evident in processes that are naturally suppressed in the Standard Model. The most recent results for the rare decays of B0s and B0 to two muons based on the full sample of data (Run 1) collected by the ATLAS detector at 7 and 8 TeV of collision energy are presented. The consistency with the Standard Model and with other available measurements is discussed.

  3. ATLAS solenoid operates underground

    CERN Multimedia

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems-a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  4. Design and test performance of the ATLAS Feature Extractor trigger boards for the Phase-1 Upgrade

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222228; The ATLAS collaboration

    2017-01-01

    In Run 3, the ATLAS Level-1 Calorimeter Trigger will be augmented by an Electron Feature Extractor (eFEX), to identify isolated e/γ and τ candidates, and a Jet Feature Extractor (jFEX), to identify energetic jets and calculate various local energy sums. Each module accommodates more than 450 differential signals that can operate at up to 12.8 Gb/s, some of which are routed over 30 cm between FPGAs. Presented here are the module designs, the processes that have been adopted to meet the challenges associated with multi-Gb/s PCB design, and the results of tests that characterize the performance of these modules.

  5. Report to users of ATLAS

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1995-05-01

    This report contains discussions in the following areas: status of the ATLAS accelerator; highlights of recent research at ATLAS; a concept for an advanced exotic beam facility based on ATLAS; the program advisory committee; the ATLAS executive committee; and ATLAS and the ANL Physics Division on the World Wide Web.

  6. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  7. A cardiac contouring atlas for radiotherapy

    DEFF Research Database (Denmark)

    Duane, Frances; Aznar, Marianne C; Bartlett, Freddie

    2017-01-01

    defined from cardiology models and agreed by two cardiologists. Reference atlas contours were delineated and written guidelines prepared. Six radiation oncologists tested the atlas. Spatial variation was assessed using the DICE similarity coefficient (DSC) and the directed Hausdorff average distance ($\vec{d}_{H,avg}$......-observer contour separation (mean $\vec{d}_{H,avg}$) was 1.5-2.2 mm for left ventricular segments and 1.3-5.1 mm for coronary artery segments. This spatial variation resulted in
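
    Below is a minimal sketch, assuming binary masks on a common pixel grid, of the two spatial-agreement metrics named above: the DICE similarity coefficient, and a mean directed distance from the points of one contour to the other as a simple stand-in for the directed Hausdorff average distance (the paper's exact definition may differ). The toy discs are illustrative.

```python
# Toy DICE coefficient and mean directed distance between two observer contours.
import numpy as np
from scipy.spatial import cKDTree

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_directed_distance(points_a, points_b):
    """Average distance from each point of contour A to its nearest point on B."""
    d, _ = cKDTree(points_b).query(points_a)
    return d.mean()

# toy usage: two slightly shifted discs standing in for observer contours
yy, xx = np.mgrid[0:100, 0:100]
a = (xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2
b = (xx - 52) ** 2 + (yy - 50) ** 2 < 20 ** 2
print("DSC =", round(dice(a, b), 3))
print("mean directed distance =",
      round(mean_directed_distance(np.argwhere(a), np.argwhere(b)), 2), "pixels")
```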

  8. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is currently observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~10^33 cm^-2 s^-1. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the results into different raw data files according to the trigger decision. The data files are subsequently moved to the central mass storage facility at CERN. The system currently in production has been commissioned in 2007 and has been working smoothly since then. It is however based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size that is foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limi...

  9. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment observes proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1 in 2011. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted average rate of ~ 400 Hz for an event size of ~1.2 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the data into different raw files according to the trigger decision. The system currently in production is based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limits the possibility of performing additional CPU-intensive tasks. Therefore, a novel design able to exploit the full power of multi-core architecture is needed. The main challen...

  10. A Hardware Fast Tracker for the ATLAS trigger

    International Nuclear Information System (INIS)

    Asbah, N.

    2016-01-01

    The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing at 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC has already started running with a much higher instantaneous luminosity. This will increase the load on the High Level Trigger system, the second stage of the selection based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, at every Level-1 accepted event (100 kHz) and within 100 μs, full tracking information for tracks with momentum as low as 1 GeV. Providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of the primary and secondary vertices to ensure robust selections and improve the trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links.
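
    The following is a minimal sketch, with made-up pattern contents and sizes, of the associative-memory idea mentioned above: hits are coarsened to "superstrip" addresses per silicon layer, and a pre-computed bank of patterns (one superstrip per layer) is matched against the fired superstrips of an event. Real AM chips do this massively in parallel in hardware; here it is a simple loop, and the superstrip width, layer count and allowed number of missing layers are assumptions.

```python
# Toy associative-memory pattern matching on coarse superstrip addresses.
SUPERSTRIP_WIDTH = 16          # assumed coarsening in strip units
N_LAYERS = 8

def superstrip(layer, strip):
    return (layer, strip // SUPERSTRIP_WIDTH)

# toy pattern bank: each pattern is a tuple of one superstrip per layer
pattern_bank = [
    tuple((l, 3 + l) for l in range(N_LAYERS)),     # pattern 0
    tuple((l, 10) for l in range(N_LAYERS)),        # pattern 1
]

def match(event_hits, max_missing=1):
    """Return indices of patterns whose superstrips are (almost) all fired."""
    fired = {superstrip(l, s) for l, s in event_hits}
    matched = []
    for i, pattern in enumerate(pattern_bank):
        missing = sum(ss not in fired for ss in pattern)
        if missing <= max_missing:
            matched.append(i)
    return matched

# toy event: hits following pattern 0 on 7 of the 8 layers
hits = [(l, (3 + l) * SUPERSTRIP_WIDTH + 5) for l in range(N_LAYERS - 1)]
print(match(hits))      # -> [0]
```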

  11. An integrated overview of metadata in ATLAS

    International Nuclear Information System (INIS)

    Gallas, E J; Malon, D; Hawkings, R J; Albrand, S; Torrence, E

    2010-01-01

    Metadata (data about data) arise in many contexts, from many diverse sources, and at many levels in ATLAS. Familiar examples include run-level, luminosity-block-level, and event-level metadata, and, related to processing and organization, dataset-level and file-level metadata, but these categories are neither exhaustive nor orthogonal. Some metadata are known a priori, in advance of data taking or simulation; other metadata are known only after processing, and occasionally, quite late (e.g., detector status or quality updates that may appear after initial reconstruction is complete). Metadata that may seem relevant only internally to the distributed computing infrastructure under ordinary conditions may become relevant to physics analysis under error conditions ('What can I discover about data I failed to process?'). This talk provides an overview of metadata and metadata handling in ATLAS, and describes ongoing work to deliver integrated metadata services in support of physics analysis.

  12. The ATLAS detector control system

    International Nuclear Information System (INIS)

    Schlenker, S.; Arfaoui, S.; Franz, S.

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of more than 130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. First, this contribution describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high luminosity upgrades are outlined. (authors)

  13. The ATLAS Detector Control System

    CERN Document Server

    Schlenker, S; Kersten, S; Hirschbuehl, D; Braun, H; Poblaguev, A; Oliveira Damazio, D; Talyshev, A; Zimmermann, S; Franz, S; Gutzwiller, O; Hartert, J; Mindur, B; Tsarouchas, CA; Caforio, D; Sbarra, C; Olszowska, J; Hajduk, Z; Banas, E; Wynne, B; Robichaud-Veronneau, A; Nemecek, S; Thompson, PD; Mandic, I; Deliyergiyev, M; Polini, A; Kovalenko, S; Khomutnikov, V; Filimonov, V; Bindi, M; Stanecka, E; Martin, T; Lantzsch, K; Hoffmann, D; Huber, J; Mountricha, E; Santos, HF; Ribeiro, G; Barillari, T; Habring, J; Arabidze, G; Boterenbrood, H; Hart, R; Marques Vinagre, F; Lafarguette, P; Tartarelli, GF; Nagai, K; D'Auria, S; Chekulaev, S; Phillips, P; Ertel, E; Brenner, R; Leontsinis, S; Mitrevski, J; Grassi, V; Karakostas, K; Iakovidis, G.; Marchese, F; Aielli, G

    2011-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of >130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years an...

  14. The readiness of ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The trigger system in ATLAS consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. The pre-existing two-level software filtering, known as L2 and the Event Filter, is now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architec...

  15. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    Science.gov (United States)

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject) / (mean volume in each subject)]. The average percentage change was calculated from the percentage changes in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that the repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
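
    A small worked example of the percentage-change definition quoted above, with made-up volumes: for each subject the deviation of the first measurement from that subject's mean of the two repeated scans is expressed in percent, and the values are then averaged. Taking the absolute value before averaging is an assumption made here so that the toy result is a positive number, as in the reported values.

```python
# Worked example of the quoted percentage-change definition (toy volumes in ml).
measurements = {                      # two repeated scans per subject
    "subj01": (612.0, 608.0),
    "subj02": (598.5, 601.5),
}

def percentage_change(first, second):
    mean = (first + second) / 2.0
    return 100.0 * (first - mean) / mean

per_subject = [percentage_change(a, b) for a, b in measurements.values()]
# absolute values before averaging are an assumption, not stated in the abstract
average_pct_change = sum(abs(p) for p in per_subject) / len(per_subject)
print(average_pct_change)             # ~0.29% for these toy numbers
```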

  16. ATLAS-AWS

    International Nuclear Information System (INIS)

    Gehrcke, Jan-Philip; Stonjek, Stefan; Kluth, Stefan

    2010-01-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
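
    The following is a minimal sketch of the control flow described above, written with the modern boto3 library rather than the original boto scripts: launch a worker instance from a prepared AMI, export a job's output file to S3, and terminate the instance. The AMI id, region, key name, bucket and file names are placeholders, not values from the original setup, and valid AWS credentials are assumed.

```python
# Sketch of the launch / export-to-S3 / terminate cycle using boto3 (placeholders throughout).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
s3 = boto3.client("s3")

# start one worker from the prepared ATLAS software image
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI with the release kit
    InstanceType="m5.large",
    MinCount=1, MaxCount=1,
    KeyName="atlas-worker-key",           # placeholder key pair
)
instance_id = resp["Instances"][0]["InstanceId"]
print("started", instance_id)

# ... the job runs on the instance; afterwards its output is pushed to S3 ...
s3.upload_file("job_output.root", "my-atlas-output-bucket", "runs/job_output.root")

# shut the worker down once the output is safely archived
ec2.terminate_instances(InstanceIds=[instance_id])
```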

  17. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  18. Validating atlas-guided DOT: a comparison of diffuse optical tomography informed by atlas and subject-specific anatomies.

    Science.gov (United States)

    Cooper, Robert J; Caffini, Matteo; Dubb, Jay; Fang, Qianqian; Custo, Anna; Tsuzuki, Daisuke; Fischl, Bruce; Wells, William; Dan, Ippeita; Boas, David A

    2012-09-01

    We describe the validation of an anatomical brain atlas approach to the analysis of diffuse optical tomography (DOT). Using MRI data from 32 subjects, we compare the diffuse optical images of simulated cortical activation reconstructed using a registered atlas with those obtained using a subject's true anatomy. The error in localization of the simulated cortical activations when using a registered atlas is due to a combination of imperfect registration, anatomical differences between atlas and subject anatomies and the localization error associated with diffuse optical image reconstruction. When using a subject-specific MRI, any localization error is due to diffuse optical image reconstruction only. In this study we determine that using a registered anatomical brain atlas results in an average localization error of approximately 18 mm in Euclidean space. The corresponding error when the subject's own MRI is employed is 9.1 mm. In general, the cost of using atlas-guided DOT in place of subject-specific MRI-guided DOT is a doubling of the localization error. Our results show that despite this increase in error, reasonable anatomical localization is achievable even in cases where the subject-specific anatomy is unavailable. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Big Data tools as applied to ATLAS event data

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2017-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and to...

  20. Fabiola Gianotti, the newly elected Spokesperson of ATLAS

    CERN Multimedia

    2008-01-01

    On 11 July Fabiola Gianotti was elected by the ATLAS Collaboration as its future Spokesperson. Her term of office will start on 1 March 2009 and will last for two years. She will take over from Peter Jenni who has been ATLAS Spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Fabiola Gianotti (CERN), Marzio Nessi (CERN), and Leonardo Rossi (INFN Genova, Italy). The nomination process started on 30 October 2007, with a general email sent to the ATLAS collaboration calling for nominations, and closed on 25 January 2008. Any ATLAS physicist could nominate a candidate, and 24 nominees were proposed before the ATLAS search committee narrowed them to the final three. After the voting process, which concluded the ATLAS general meeting in Bern, the Collaboration Board greeted the result with warm applause.

  1. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    Science.gov (United States)

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  2. ATLAS Muon Drift Tube Electronics

    CERN Document Server

    Arai, Y; Beretta, M; Boterenbrood, H; Brandenburg, G W; Ceradini, F; Chapman, J W; Dai, T; Ferretti, C; Fries, T; Gregory, J; Guimarães da Costa, J; Harder, S; Hazen, E; Huth, J; Jansweijer, P P M; Kirsch, L E; König, A C; Lanza, A; Mikenberg, G; Oliver, J; Posch, C; Richter, R; Riegler, W; Spiriti, E; Taylor, F E; Vermeulen, J; Wadsworth, B; Wijnen, T A M

    2008-01-01

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 microns, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.

  3. 16 February 2012 - Chinese Taipei Ambassador to Switzerland F. Hsieh in the ATLAS visitor centre, ATLAS experimental area and LHC tunnel at Point 1 with Collaboration Deputy Spokesperson A. Lankford, throughout accompanied by International Relations Adviser R. Voss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    16 February 2012 - Chinese Taipei Ambassador to Switzerland F. Hsieh in the ATLAS visitor centre, ATLAS experimental area and LHC tunnel at Point 1 with Collaboration Deputy Spokesperson A. Lankford, throughout accompanied by International Relations Adviser R. Voss.

  4. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  5. The ATLAS Tau Trigger

    CERN Document Server

    Dam, M; The ATLAS collaboration

    2009-01-01

    The ATLAS experiment at CERN's LHC has implemented a dedicated tau trigger system to select hadronically decaying tau leptons from the enormous background of QCD jets. This promises a significant increase in the discovery potential for the Higgs boson and in searches for physics beyond the Standard Model. The three-level trigger system has been optimised for efficiency and good background rejection. The first level uses information from the calorimeters only, while the two higher levels also include information from the tracking detectors. Shower shape variables and the track multiplicity are important variables to distinguish taus from QCD jets. At the initial luminosity of 10^31 cm^-2 s^-1, single tau triggers with a transverse energy threshold of 50 GeV or higher can be run standalone. Below this level, the tau signatures will be combined with other event signatures.

  6. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation.

    Science.gov (United States)

    Zikic, D; Glocker, B; Criminisi, A

    2014-12-01

    We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time. Copyright © 2014 Elsevier B.V. All rights reserved.
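
    Below is a minimal sketch, on toy single-channel 2D images assumed to be already aligned to a common reference space, of the Atlas Forest idea described above: one small forest is trained per atlas on voxel-wise features (here intensity plus normalised coordinates, a simplifying assumption), each forest emits a probabilistic label map for the target, and fusion is a plain average of those maps. It uses scikit-learn rather than the paper's own implementation.

```python
# Toy multi-atlas label propagation with one random forest per atlas, fused by averaging.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
shape = (24, 24)
coords = np.indices(shape).reshape(2, -1).T / np.array(shape)   # (N, 2) in [0, 1)

def make_atlas():
    """Toy atlas: a noisy disc image with its ground-truth label map."""
    centre = np.array([0.5, 0.5]) + rng.normal(0, 0.03, 2)
    labels = (np.linalg.norm(coords - centre, axis=1) < 0.3).astype(int)
    image = labels + rng.normal(0, 0.3, labels.size)
    return image, labels

def features(image):
    return np.column_stack([image, coords])     # intensity + (x, y) per voxel

# encode each atlas by its own small, deep forest
atlas_forests = []
for _ in range(5):
    img, lab = make_atlas()
    forest = RandomForestClassifier(n_estimators=8, max_depth=None, random_state=0)
    forest.fit(features(img), lab)
    atlas_forests.append(forest)

# label propagation to a new target image: average the per-atlas probabilities
target_img, target_truth = make_atlas()
probs = np.mean([f.predict_proba(features(target_img))[:, 1] for f in atlas_forests], axis=0)
pred = (probs > 0.5).astype(int)
print("voxel accuracy:", (pred == target_truth).mean())
```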

  7. The performance of the jet trigger for the ATLAS detector during 2011 data taking

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Aben, Rosemarie; Abolins, Maris; AbouZeid, Ossama; Abraham, Nicola; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Agricola, Johannes; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Armitage, Lewis James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Baines, John; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Janna Katharina; Belanger-Champagne, Camille; Bell, Andrew Stuart; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; 
Bellomo, Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Belyaev, Nikita; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez, Jose; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Beringer, Jürg; Berlendis, Simon; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertram, Iain Alexander; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bevan, Adrian John; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Biedermann, Dustin; Bielski, Rafal; Biesuz, Nicolo Vladi; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bjergaard, David Martin; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blanco, Jacobo Ezequiel; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Boerner, Daniela; Bogaerts, Joannes Andreas; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortoletto, Daniela; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Bossio Sola, Jonathan David; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Broughton, James; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, Alessia; Bruni, Graziano; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burghgrave, Blake; Burka, Klaudia; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabrera Urbán, Susana; Caforio, Davide; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Caloba, Luiz; Calvet, David; Calvet, Samuel; Calvet, Thomas Philippe; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; Cameron, David; Caminal Armadans, 
Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Casper, David William; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavallaro, Emanuele; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerda Alberich, Leonor; Cerio, Benjamin; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Stephen Kam-wah; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, Dave; Chatterjee, Avishek; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Huajie; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chomont, Arthur Rene; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Michael; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Colasurdo, Luca; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Crawley, Samuel Joseph; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cúth, Jakub; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey Rogers; Dang, Nguyen Phuong; Daniells, Andrew Christopher; Dann, Nicholas Stuart; Danninger, Matthias; 
Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Merlin; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Clemente, William Kennedy; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Edson, William; Edwards, Nicholas Charles; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellajosyula, Venugopal; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Ennis, Joseph Stanford; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Fabbri, Federica; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Christian; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fawcett, William James; Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; 
Feng, Haolu; Fenyuk, Alexander; Feremenga, Last; Fernandez Martinez, Patricia; Fernandez Perez, Sonia; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Gareth Thomas; Fletcher, Gregory; Fletcher, Rob Roy MacGregor; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Forcolin, Giulio Tiziano; Formica, Andrea; Forti, Alessandra; Foster, Andrew Geoffrey; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Louis Guillaume; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gascon Bravo, Alberto; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gecse, Zoltan; Gee, Norman; Geich-Gimbel, Christoph; Geisler, Manuel Patrice; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghasemi, Sara; Ghazlane, Hamid; Ghneimat, Mazuza; Giacobbe, Benedetto; Giagu, Stefano; Giannetti, Paola; Gibbard, Bruce; Gibson, Stephen; Gignac, Matthew; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuli, Francesco; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gongadze, Alexi; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Gozani, Eitan; Graber, Lars; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Grafström, Per; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Grevtsov, Kirill; 
Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Grohs, Johannes Philipp; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guan, Wen; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Guo, Yicheng; Gupta, Shaun; Gustavino, Giuliano; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Hadef, Asma; Haefner, Petra; Hageböck, Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Hall, David; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Haney, Bijan; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Makoto; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Jochen Jens; Heinrich, Lukas; Heinz, Christian; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hinman, Rachel Reisner; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohlfeld, Marc; Hohn, David; Holmes, Tova Ray; Homann, Michael; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Qipeng; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Ince, Tayfun; Introzzi, Gianluca; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ito, Fumiaki; Iturbe Ponce, 
Julia Mariana; Iuppa, Roberto; Ivarsson, Jenny; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Matthew; Jackson, Paul; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansky, Roland; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanneau, Fabien; Jeanty, Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiggins, Stephen; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Johansson, Per; Johns, Kenneth; Johnson, William Joseph; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Juste Rozas, Aurelio; Köhler, Markus Konrad; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneti, Steven; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kempster, Jacob Julian; Kentaro, Kawade; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharlamov, Alexey; Khoo, Teng Jian; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kivernyk, Oleh; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Knapik, Joanna; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Koi, Tatsumi; Kolanoski, Hermann; Kolb, Mathis; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotwal, Ashutosh; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewska, Anna Bozena; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, 
Attila; Kraus, Jana; Kravchenko, Anton; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lammers, Sabine; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Lazovich, Tomo; Lazzaroni, Massimo; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; Le Quilleuc, Eloi; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Lerner, Giuseppe; Leroy, Claude; Lesage, Arthur; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Qi; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Lindquist, Brian Edward; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Hao; Liu, Hongbin; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loew, Kevin Michael; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Longo, Luigi; Looper, Kristina Anne; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; L{ö}sel, Philipp Jonathan; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Haonan; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, 
Bengt; Lynn, David; Lysak, Roman; Lytken, Else; Lyubushkin, Vladimir; Ma, Hong; Ma, Lian Liang; Ma, Yanhui; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Mansoulie, Bruno; Mantifel, Rodger; Mantoani, Matteo; Manzoni, Stefano; Mapelli, Livio; Marceca, Gino; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McClymont, Laurie; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Monden, Ryutaro; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morvaj, Ljiljana; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; 
Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Muškinja, Miha; Myagkov, Alexey; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Nef, Pascal Daniel; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Norjoharuddeen, Nurfikri; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'grady, Fionnbarr; O'Neil, Dugan; O'Rourke, Abigail Alexandra; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Palma, Alberto; Panagiotopoulou, Evgenia; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Adam Jackson; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penwell, John; 
Peralva, Bernardo; Perego, Marta Maria; Perepelitsa, Dennis; Perez Codina, Estel; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrov, Mariyan; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pin, Arnaud Willy J; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pires, Sylvestre; Pirumov, Hayk; Pitt, Michael; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Pluth, Daniel; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Pranko, Aliaksandr; Prell, Soeren; Price, Darren; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puddu, Daniele; Puldon, David; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Raine, John Andrew; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Ratti, Maria Giulia; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reisin, Hernan; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Rizzi, Chiara; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodina, Yulia; Rodriguez Perez, Andrea; Rodriguez Rodriguez, Daniel; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, 
Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sasaki, Yuichi; Sato, Koji; Sauvage, Gilles; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schorlemmer, Andre Lukas; Schott, Matthias; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schwegler, Philipp; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyedruhollah; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sidebo, Per Edvin; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Dorian; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, 
Therese; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Hong Ye; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stahlman, Jonathan; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramaniam, Rajivalochan; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Shuji; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Ray; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorov, Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; 
Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turgeman, Daniel; Turra, Ruggero; Turvey, Andrew John; Tuts, Michael; Tyndel, Mike; Ucchielli, Giulia; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Tingting; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; 
Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yakabe, Ryota; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yen, Andy L; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zwalinski, Lukasz

    2016-09-27

    The performance of the jet trigger for the ATLAS detector at the LHC during the 2011 data-taking period is described. During 2011 the LHC provided proton–proton collisions with a centre-of-mass energy of 7 TeV and heavy-ion collisions with an energy of 2.76 TeV per nucleon–nucleon collision. The ATLAS trigger is a three-level system designed to reduce the rate of events from the 40 MHz nominal maximum bunch-crossing rate to the approximately 400 Hz that can be written to offline storage. The ATLAS jet trigger is the primary means for the online selection of events containing jets. Events are accepted by the trigger if they contain one or more jets above some transverse energy threshold. During 2011 data taking the jet trigger was fully efficient for jets with transverse energy above 25 GeV for triggers seeded randomly at Level 1. For triggers which require a jet to be identified at each of the three trigger levels, full efficiency is reached for offline jets with transverse energy above 60 GeV. Jets reconstruc...
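
    To make the selection rule above concrete, here is a minimal, purely illustrative sketch of a single-jet transverse-energy threshold decision (toy Python, not the ATLAS trigger implementation; the function name and the event representation are made up, and only the 25 GeV value is taken from the plateau figure quoted above):

        # Illustrative toy only: accept an event if any jet is above an ET threshold.
        def passes_jet_trigger(jet_ets_gev, threshold_gev=25.0):
            """Return True if at least one jet exceeds the transverse-energy threshold."""
            return any(et > threshold_gev for et in jet_ets_gev)

        # Toy usage on made-up events (jet ETs in GeV).
        events = [
            [12.0, 8.5],        # soft event, rejected
            [31.2, 14.0, 9.8],  # leading jet above threshold, accepted
            [26.5],             # accepted
        ]
        accepted = sum(passes_jet_trigger(jets) for jets in events)
        print(f"accepted {accepted} of {len(events)} toy events")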

  8. Upgrade of the ATLAS Monitored Drift Tube Frontend Electronics for the HL-LHC

    CERN Document Server

    Zhu, Junjie; The ATLAS collaboration

    2017-01-01

    The ATLAS monitored drift tube (MDT) chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT system is capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT=1 TeV. To cope with the large amount of data and the high event rate expected from the High-Luminosity LHC (HL-LHC) upgrade, ATLAS plans to use the MDT detector at the first trigger level to improve the muon transverse momentum resolution and reduce the trigger rate. The new MDT trigger and readout system will have an output event rate of 1 MHz and a latency of 6 μs at the first-level trigger. The signals from the MDT tubes are first processed by an Amplifier/Shaper/Discriminator (ASD) ASIC, and the binary differential signals output by the ASDs are then routed to the Time-to-Digital Converter (TDC) ASIC, where the arrival times of leading and trailing edges are digitized in a time bin of 0.78 ns, which leads to an RMS timing error of 0.25 n...
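
    As a quick cross-check of the timing figures quoted above (a textbook uniform-quantization estimate, not taken from the record itself): for an ideal TDC with bin width \Delta t, the RMS quantization error is \Delta t / \sqrt{12}, i.e.

        \sigma_t \;=\; \frac{\Delta t_{\mathrm{bin}}}{\sqrt{12}}
                 \;=\; \frac{0.78\ \mathrm{ns}}{\sqrt{12}}
                 \;\approx\; 0.23\ \mathrm{ns},

    which is close to the ~0.25 ns RMS timing error quoted above.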

  9. Creation of RTOG compliant patient CT-atlases for automated atlas based contouring of local regional breast and high-risk prostate cancers.

    Science.gov (United States)

    Velker, Vikram M; Rodrigues, George B; Dinniwell, Robert; Hwee, Jeremiah; Louie, Alexander V

    2013-07-25

    Increasing use of IMRT to treat breast and prostate cancers at high risk of regional nodal spread relies on accurate contouring of targets and organs at risk, which is subject to significant inter- and intra-observer variability. This study sought to evaluate the performance of an atlas-based deformable registration algorithm to create multi-patient CT-based atlases for automated contouring. Breast and prostate multi-patient CT atlases (n = 50 and 14, respectively) were constructed to be consistent with RTOG consensus contouring guidelines. A commercially available software algorithm was evaluated by comparing atlas-predicted contours against manual contours using Dice similarity coefficients. High levels of agreement were demonstrated for prediction of OAR contours of lungs, heart, and femurs, with only minor editing required for the CTV breast/chest wall. CTVs generated for axillary nodes, supraclavicular nodes, prostate, and pelvic nodes demonstrated modest agreement. Small and highly variable structures, such as internal mammary nodes, lumpectomy cavity, rectum, penile bulb, and seminal vesicles, had poor agreement. A method to construct and validate the performance of CT-based multi-patient atlases for automated atlas-based auto-contouring has been demonstrated, and can be adopted for clinical use in planning of local regional breast and high-risk prostate radiotherapy.
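
    For reference, the comparison metric mentioned above is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for an automated mask A and a manual mask B. A minimal NumPy sketch (illustrative only, not the commercial software's implementation; function and variable names are made up):

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice similarity coefficient between two boolean masks of equal shape."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 1.0 if total == 0 else 2.0 * intersection / total

        # Toy 1D example; real use would pass 3D CT label masks.
        auto   = np.array([0, 1, 1, 1, 0, 0], dtype=bool)
        manual = np.array([0, 1, 1, 0, 0, 0], dtype=bool)
        print(round(dice_coefficient(auto, manual), 3))  # 0.8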

  10. Creation of RTOG compliant patient CT-atlases for automated atlas based contouring of local regional breast and high-risk prostate cancers

    International Nuclear Information System (INIS)

    Velker, Vikram M; Rodrigues, George B; Dinniwell, Robert; Hwee, Jeremiah; Louie, Alexander V

    2013-01-01

    Increasing use of IMRT to treat breast and prostate cancers at high risk of regional nodal spread relies on accurate contouring of targets and organs at risk, which is subject to significant inter- and intra-observer variability. This study sought to evaluate the performance of an atlas-based deformable registration algorithm to create multi-patient CT-based atlases for automated contouring. Breast and prostate multi-patient CT atlases (n = 50 and 14, respectively) were constructed to be consistent with RTOG consensus contouring guidelines. A commercially available software algorithm was evaluated by comparing atlas-predicted contours against manual contours using Dice similarity coefficients. High levels of agreement were demonstrated for prediction of OAR contours of lungs, heart, and femurs, with only minor editing required for the CTV breast/chest wall. CTVs generated for axillary nodes, supraclavicular nodes, prostate, and pelvic nodes demonstrated modest agreement. Small and highly variable structures, such as internal mammary nodes, lumpectomy cavity, rectum, penile bulb, and seminal vesicles, had poor agreement. A method to construct and validate the performance of CT-based multi-patient atlases for automated atlas-based auto-contouring has been demonstrated, and can be adopted for clinical use in planning of local regional breast and high-risk prostate radiotherapy.

  11. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005; Software Workshop, 21-22 February 2005. Click here to browse WLAP for all ATLAS lectures.

  12. Atlas use in teaching geography in higher education in the U.S. and Canada

    Directory of Open Access Journals (Sweden)

    Jerry Green

    2017-10-01

    Full Text Available Skills in map use and interpretation are important in geography education. Atlases represent special collections of maps that can be beneficial for developing map use, interpretation, and spatial analysis skills in geography students. In this study, we examine the utilization of atlases in geographic coursework. We surveyed 295 geography instructors in the U.S. and Canada about their usage of both print and digital atlases in geography courses of different levels. The survey generated 54 responses. The findings indicated that about 39 percent of instructors use atlases in instruction, and most of those use print atlases rather than digital atlases. It was found that most of the instructors who use atlases in their instruction teach upper-level Human Geography courses. Some other general courses in which atlases were used are: Introduction to GIS, Remote Sensing, World Regional Geography, and Introduction to Physical Geography. As indicated by the survey responses, atlases are widely used in special topic courses such as World Forests, Geography of North America, Research Methods in Geography, Natural Hazards, Geography of Europe, History and Theory of Geography, Current World Affairs, Geography of Pennsylvania, Political Geography, Geography of Russia, North American House Types, and Geography of Consumption. In addition to analyzing the survey responses, we also provide examples of atlas use in a variety of courses. We conclude that atlases are useful for studies of spatial associations and geographic patterns, as a background-information or context resource, as a source that helps students learn geographic locations, and as a way to learn cartographic methods and map design.

  13. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
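
    The vessel-structure atlas selection itself is specific to the paper, but the multi-atlas pipeline it plugs into can be sketched generically: rank the (already registered) atlases by some similarity score to the target, keep the top few, and fuse their propagated labels. The score and the data below are placeholders, not the authors' method:

        import numpy as np

        def select_atlases(target, atlases, k):
            """Rank atlas images by a simple similarity score to the target (here the
            negative mean squared intensity difference) and keep the top k. The paper's
            vessel-structure criterion would replace this placeholder score."""
            scores = [-np.mean((target - atlas) ** 2) for atlas in atlases]
            return sorted(range(len(atlases)), key=lambda i: scores[i], reverse=True)[:k]

        def majority_vote(labels):
            """Fuse propagated binary label maps by per-voxel majority voting."""
            stacked = np.stack(labels).astype(int)
            return (stacked.sum(axis=0) * 2 > len(labels)).astype(np.uint8)

        # Toy data: three "atlas" volumes assumed already registered to the target
        # (a real pipeline would use deformable registration), plus their organ labels.
        rng = np.random.default_rng(0)
        target_img = rng.normal(size=(16, 16, 16))
        atlas_imgs = [target_img + rng.normal(scale=s, size=target_img.shape) for s in (0.1, 0.5, 1.0)]
        atlas_labels = [np.zeros(target_img.shape, dtype=np.uint8) for _ in atlas_imgs]
        for lab in atlas_labels:
            lab[4:12, 4:12, 4:12] = 1

        chosen = select_atlases(target_img, atlas_imgs, k=2)
        fused = majority_vote([atlas_labels[i] for i in chosen])
        print("selected atlases:", chosen, "fused voxels:", int(fused.sum()))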

  14. ATLAS: Now under new management

    CERN Multimedia

    Katarina Anthony

    2013-01-01

    On 1 March, the ATLAS Collaboration welcomed a new spokesperson, Dave Charlton (University of Birmingham), and two new deputy spokespersons, Thorsten Wengler (CERN) and Beate Heinemann (University of California, Berkeley and LBNL). The Bulletin takes a look at what’s in store for one of the world’s largest scientific collaborations.   ATLAS members at the 2010 collaboration meeting in Copenhagen. Image: Rune Johansen and Troels Petersen. ATLAS spokesperson Dave Charlton has seen the collaboration through countless milestones: from construction to start-up to the 4 July 2012 announcement, he’s been an integral part of the team. Now, after twelve years with the collaboration, Dave is moving into the main office for the next two years. “2012 was a landmark year for ATLAS,” says Dave. “We spent a lot of time in the limelight and, in many ways, all eyes are still on us. But with the shutdown now under way, our focus is ...

  15. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  16. The ATLAS b-jet Trigger

    CERN Document Server

    Ferreira de Lima, D E; The ATLAS collaboration

    2011-01-01

    The ATLAS detector at the LHC has a three-level trigger, which selects events relevant for the physics goals of the experiment. The identification of jets arising from bottom quark production is important in many analyses. The b-tagging at the ATLAS trigger relies on the fragmentation of the b quark, which generates a B hadron that retains most of the parent quark’s momentum (∼70%). Furthermore, the high b quark mass results in decay products with high momenta with respect to the jet axis. The lifetime tagger relies on the relatively long lifetime of B hadrons (∼1.6 ps in their rest frame), which allows them to have a long decay length. Due to the large mass of the B hadron, the tracks reconstructed from this decay often have large impact parameters compared to those in prompt jets. The algorithms exploit this by identifying tracks from the B hadron decay which are displaced from the primary interaction vertex and thus indicate that a long-lived particle was present. The latest performance results...

  17. Analysis and predictive modeling of the performance of the ATLAS TDAQ network

    CERN Document Server

    Leahu, Lucian; Buzuloiu, V; Martin, B

    After almost twenty years of research, development and installation, the Large Hadron Collider (LHC) accelerator at CERN produced its first collisions in 2008, planning to run until the end of 2012. ATLAS (A Toroidal LHC ApparatuS) is the biggest experiment built and operated on the LHC ring. Being a general-purpose detector, it studies a wide range of physics aspects, of which the search for the “God particle” - the Higgs boson - is its most significant mission. In 2012 ATLAS already recorded collision data, called events, which were, with high probability, candidates for proving the existence of this particle. Capturing this type of “interesting” events is the task of the ATLAS detector; however, filtering them from the huge amount of data being generated is the purpose of the Trigger and Data Acquisition system (TDAQ). ATLAS TDAQ is implemented as a three-layer filter, reducing in real time the rates of the events (1.6 Mbytes each) down to a level which can be written to mass storage: from 40 ...

  18. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Physics Workshop 6-11 June 2005 June 2005 ATLAS Week Plenary Session Click here to browse WLAP for all ATLAS lectures.

  19. ATLAS EventIndex Data Collection Supervisor and Web Interface

    CERN Document Server

    Garcia Montoro, Carlos; The ATLAS collaboration; Sanchez, Javier

    2016-01-01

    The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment [1][2] at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. This paper presents two components of the ATLAS EventIndex [3]: its data collection supervisor and its web interface.

  20. Hidden Valley Search at ATLAS

    CERN Document Server

    Verducci, M; The ATLAS collaboration

    2011-01-01

    A number of extensions of the Standard Model result in neutral and weakly coupled particles that decay to multiple hadrons or leptons with macroscopic decay lengths. These particles, with decay paths that can be comparable to the ATLAS detector dimensions, represent, from an experimental point of view, a challenge both for the trigger and for the reconstruction capabilities of the ATLAS detector. We will present a set of signature-driven triggers for the ATLAS detector that target such displaced decays, evaluate their performance for some benchmark models, and describe analysis strategies and limits on the production of such long-lived particles that can be achieved with the first 100 pb-1.

  1. EnviroAtlas Green Space Proximity Gradient Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). In any given 1-square meter...

  2. Recent Development in the ATLAS Control Room

    CERN Document Server

    Armen Vartapetian

    Until recently, the name ATLAS Control Room (ACR) was more associated with the building at Point 1 (SCX1) than with the real thing. But just within the last several months, with the installation of the ACR hardware, that perception has changed significantly. The recently furnished ATLAS control room. But first of all, if you are not familiar with the ATLAS experimental site and are interested in visiting the ATLAS control room to see the place that in the near future will become the brain of the detector operations, it is quite easy to do so. You don't even need a safety helmet or shoes! The ACR is located on the ground floor of a not so typical, glass-covered building in Point 1. The building number on the CERN map is 3162, or SCX1 as we call it. It is also easy to recognize that building by its shiny appearance within the cluster of Point 1 buildings if you are driving from Geneva. Final design and prototyping of the ACR hardware started at the beginning of 2006. Evaluation of the chosen hardware confi...

  3. Fine-grain Parallel Processing On A Commodity Platform: A Solution For The Atlas Second-level Trigger

    CERN Document Server

    Boosten, M

    2003-01-01

    From 2005 on, CERN expects to have a new accelerator available for experiments: the Large Hadron Collider (LHC), with a circumference of 27 kilometres. The ATLAS detector produces 40 TeraBytes/s of data. Only a fraction of all data is interesting. A computer system, called the trigger, selects the interesting data through real-time data analysis. The trigger consists of three subsequent filtering levels: LVL1, LVL2, and LVL3. LVL1 will be implemented using special-purpose hardware. LVL2 and LVL3 will be implemented using a Network Of Workstations (NOW). A major problem is to make efficient use of the computing power available in each workstation. The major contribution of this designer's project is an infrastructure named MESH. MESH enables CERN to cost-effectively implement the LVL2 trigger. Furthermore, due to the use of commodity technology, MESH enables the LVL2 trigger to be cost-effectively upgraded and supported during its 20 year lifecycle. MESH facilitates efficient parallel processing on PCs interc...

  4. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Background: We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description: The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion: The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
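
    The loader/retrieval pattern described above can be sketched schematically. The snippet below is illustrative only: it uses SQLite and invented table and column names, not the actual Atlas schema or its C++/Java/Perl APIs:

        import sqlite3

        # Illustrative only: the table and column names are invented, not the Atlas schema.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE gene        (gene_id INTEGER PRIMARY KEY, symbol TEXT, taxon_id INTEGER);
        CREATE TABLE interaction (gene_a INTEGER, gene_b INTEGER, source TEXT,
                                  FOREIGN KEY(gene_a) REFERENCES gene(gene_id),
                                  FOREIGN KEY(gene_b) REFERENCES gene(gene_id));
        """)

        def load_genes(rows):
            """'Loader application' role: parse a source dataset and insert it."""
            conn.executemany("INSERT INTO gene VALUES (?, ?, ?)", rows)

        def interactions_for(symbol):
            """'Toolbox/retrieval API' role: query the integrated data through SQL."""
            return conn.execute("""
                SELECT ga.symbol, gb.symbol, i.source
                FROM interaction i
                JOIN gene ga ON ga.gene_id = i.gene_a
                JOIN gene gb ON gb.gene_id = i.gene_b
                WHERE ga.symbol = ?""", (symbol,)).fetchall()

        load_genes([(1, "TP53", 9606), (2, "MDM2", 9606)])
        conn.execute("INSERT INTO interaction VALUES (1, 2, 'BIND')")
        print(interactions_for("TP53"))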

  5. New format for ATLAS e-news

    CERN Multimedia

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to: http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html. ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  6. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    Science.gov (United States)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  7. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins.

    CERN Document Server

    AUTHOR|(SzGeCERN)637120; The ATLAS collaboration; Odier, Jerome; Fulachier, Jerome

    2017-01-01

    The ATLAS Metadata Interface (AMI) is a mature application with more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system; therefore, the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  8. ATLAS Facility and Instrumentation Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik

    2009-06-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1,300 instruments are installed to precisely investigate the thermal-hydraulic behavior in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specification and location of the instrumentation specific to the simulation of a 50% DVI line break accident of the APR1400, in support of the 50th OECD/NEA International Standard Problem Exercise (ISP-50)

  9. Experimental Results of OECD-ATLAS A3.1 Test

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Kang, Kyung Ho; Bae, Byoung Uhn; Park, Yu Sun; Choi, Nam Hyun; Kim, Kyung Doo; Choi, Ki Yong [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    After the Fukushima accident, design extension conditions (DECs) such as a station black-out (SBO) and a TLOFW attracted wide international attention, in the sense that such high-risk multiple-failure accidents should be revisited from the viewpoint of reinforcing the 'defense in depth' concept. In particular, a TLOFW event has been considered one of the typical beyond-design-basis accidents (bDBAs) in the safety analysis of pressurized water reactors. From a conservative point of view, however, failure of the active safety components needs to be considered in the safety analysis. During a TLOFW accident the most effective safety-related active components are the safety injection pumps (SIPs) and the pilot-operated safety relief valves (POSRVs), which are used in a feed and bleed operation. The OECD-ATLAS A3.1 test was performed to simulate a TLOFW with additional failures such as partial failure of the SIPs and POSRVs. The test was performed in two temporal phases with the aim of investigating the effect of feed and bleed operation as an accident mitigation measure. Major findings of the A3.1 test are summarized as follows: - Following the termination of the feedwater supply, the SGs dried out due to the cyclic opening and closing of the MSSVs. However, the coolant discharge from the secondary side of the steam generators through the MSSVs resulted in removing the decay heat and establishing natural circulation in the primary system. - A large coolant inventory loss of the primary system through the POSRV during a feed and bleed operation resulted in a reduction of the core collapsed level, but the minimum core level was still above the top of the active core. As a result, no excursion of the maximum PCT was observed.

  10. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2013-01-01

    The ATLAS Production System is the top level workflow manager which translates physicists' needs for production level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...

  11. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    The ATLAS Production System is the top level workflow manager which translates physicists' needs for production level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...

  12. Low-Level Test of the New Read-Out-Driver (ROD) Module and Back-of-Crate (BOC) Module for ATLAS IBL Data Acquisition System Upgrade

    CERN Document Server

    Hanindhito, Bagus

    2014-01-01

    During the first long shutdown of the Large Hadron Collider, most of the experiment infrastructure at CERN will be upgraded in preparation for operation at higher energy, opening new possibilities to discover the unknown in particle physics. ATLAS, the biggest particle detector at CERN, will also be upgraded by constructing a new pixel sensor layer. This new pixel sensor layer is called the ATLAS Insertable B-Layer (IBL). The IBL will be installed between the existing pixel sensor and a new, smaller-radius beam pipe. The installation of the IBL will introduce new levels of radiation and pixel occupancy. Therefore, it requires the development of new technologies to support the ATLAS IBL upgrade and also to improve the physics performance of the existing pixel sensor. One of the important key technologies that must be upgraded is the data acquisition system. The development of the new front-end ASIC, the FE-I4, to answer the challenges in the data acquisition system will require new off-detector electronics. The new off-detector electronics ...

  13. ATLAS: Full power for the toroid magnet

    CERN Multimedia

    2006-01-01

    The 9th of November was a memorable day for ATLAS. Just before midnight, the gigantic Barrel toroid magnet reached its nominal field of 4 teslas in the coil windings, with an electrical current of 21000 amperes (21 kA) passing through the eight superconducting coils (as seen on the graph). This achievement was obtained after several weeks of commissioning. The ATLAS Barrel Toroid was first cooled down for about six weeks in July-August to -269°C (4.8 K) and then powered up step-by-step in successive test sessions to 21 kA. This is 0.5 kA above the current required to produce the nominal magnetic field. Afterwards, the current was safely switched off and the stored magnetic energy of 1.1 gigajoules was dissipated in the cold mass, raising its temperature to a safe -218°C (55 K). 'We can now say that the ATLAS Barrel Toroid is ready for physics,' said Herman ten Kate, project leader for the ATLAS magnet system. The ATLAS barrel toroid magnet is the result of a close collaboration between the magnet la...

  14. Online precision gas evaluation of the ATLAS Muon Spectrometer during LHC Run1

    CERN Document Server

    AUTHOR|(CDS)2092735; The ATLAS collaboration

    2016-01-01

    The ATLAS Muon Spectrometer, a six-story structure embedded in a toroidal magnetic field, is constructed of nearly 1200 Monitored Drift Tube chambers (MDTs) containing 354,000 aluminum drift tubes. The operating gas is 93% Ar + 7% CO2 with a small amount of water vapor at a pressure of 3 bar. The momentum resolution required for ATLAS physics demands that MDT gas quality and the associated gas-dependent calibrations be determined with a rapid feedback cycle. During LHC Run 1, more than 2 billion liters of gas flowed through the detector at a rate of 100,000 l/hr. Online evaluation of MDT gas in real time and the associated contribution to the determination of the time-to-space functions was conducted by the dedicated Gas Monitor Chamber (GMC). We report on the operation and results of the GMC over the first three years of LHC running. During this period, the GMC has operated with a nearly 100% duty cycle, providing hourly measurements of the MDT drift times with 1 ns precision, corresponding to minute ch...

  15. ATLAS: triggers for B-physics

    International Nuclear Information System (INIS)

    George, Simon

    2000-01-01

    The LHC will produce bb-bar events at an unprecedented rate. The number of events recorded by ATLAS will be limited by the rate at which they can be stored offline and subsequently analysed. Despite the huge number of events, the small branching ratios mean that analysis of many of the most interesting channels for CP violation and other measurements will be limited by statistics. The challenge for the Trigger and Data Acquisition (DAQ) system is therefore to maximise the fraction of interesting B decays in the B-physics data stream. The ATLAS Trigger/DAQ system is split into three levels. The initial B-physics selection is made in the first-level trigger by an inclusive low-pT muon trigger (∼6 GeV). The second-level trigger strategy is based on identifying classes of final states by their partial reconstruction. The muon trigger is confirmed before proceeding to a track search. Electron/hadron separation is given by the transition radiation tracking detector and the electromagnetic calorimeter. Muon identification is possible using the muon detectors and the hadronic calorimeter. From silicon strips, pixels and straw tracking, precise track reconstruction is used to make selections based on invariant mass, momentum and impact parameter. The ATLAS trigger group is currently engaged in algorithm development and performance optimisation for the B-physics trigger. This is closely coupled to the R and D programme for the higher-level triggers. Together the two programmes of work will optimise the hardware, architecture and algorithms to meet the challenging requirements. This paper describes the current status and progress of this work
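
    As a purely illustrative sketch of the kind of partial-reconstruction selection mentioned above (this is not ATLAS trigger code; the kinematics and the mass window are placeholders), an invariant-mass requirement on a two-muon combination could look like:

        import math

        def four_vector(pt, eta, phi, mass):
            """Build an (E, px, py, pz) four-vector from pT, eta, phi and a mass hypothesis (GeV)."""
            px, py = pt * math.cos(phi), pt * math.sin(phi)
            pz = pt * math.sinh(eta)
            e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
            return (e, px, py, pz)

        def invariant_mass(p4_a, p4_b):
            """Invariant mass of the two-particle combination (GeV)."""
            e  = p4_a[0] + p4_b[0]
            px = p4_a[1] + p4_b[1]
            py = p4_a[2] + p4_b[2]
            pz = p4_a[3] + p4_b[3]
            return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

        MU_MASS = 0.1057  # GeV
        # Toy dimuon pair (central tracks for simplicity) tuned to give a mass near the J/psi.
        mu_plus  = four_vector(pt=6.0, eta=0.0, phi=0.0, mass=MU_MASS)
        mu_minus = four_vector(pt=4.0, eta=0.0, phi=0.6435, mass=MU_MASS)

        m = invariant_mass(mu_plus, mu_minus)
        # Placeholder mass window loosely around the J/psi; real trigger thresholds differ.
        accepted = 2.8 < m < 3.4
        print(f"m(mu+mu-) = {m:.3f} GeV, accepted = {accepted}")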

  16. ATLAS Muon Drift Tube Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Arai, Y [KEK, High Energy Accelerator Research Organisation, Tsukuba (Japan); Ball, B; Chapman, J W; Dai, T; Ferretti, C; Gregory, J [University of Michigan, Department of Physics, Ann Arbor, MI (United States); Beretta, M [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Boterenbrood, H; Jansweijer, P P M [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Brandenburg, G W; Fries, T; Costa, J Guimaraes da; Harder, S; Huth, J [Harvard University, Laboratory for Particle Physics and Cosmology, Cambridge, MA (United States); Ceradini, F [INFN Roma Tre and Universita Roma Tre, Dipartimento di Fisica, Roma (Italy); Hazen, E [Boston University, Physics Department, Boston, MA (United States); Kirsch, L E [Brandeis University, Department of Physics, Waltham, MA (United States); Koenig, A C [Radboud University Nijmegen/Nikhef, Dept. of Exp. High Energy Physics, Nijmegen (Netherlands); Lanza, A [INFN Pavia, Pavia (Italy); Mikenberg, G [Weizmann Institute of Science, Department of Particle Physics, Rehovot (Israel)], E-mail: brandenburg@physics.harvard.edu (and others)

    2008-09-15

    This paper describes the electronics used for the ATLAS monitored drift tube (MDT) chambers. These chambers are the main component of the precision tracking system in the ATLAS muon spectrometer. The MDT detector system consists of 1,150 chambers containing a total of 354,000 drift tubes. It is capable of measuring the sagitta of muon tracks to an accuracy of 60 μm, which corresponds to a momentum accuracy of about 10% at pT = 1 TeV. The design and performance of the MDT readout electronics as well as the electronics for controlling, monitoring and powering the detector will be discussed. These electronics have been extensively tested under simulated running conditions and have undergone radiation testing certifying them for more than 10 years of LHC operation. They are now installed on the ATLAS detector and are operating during cosmic ray commissioning runs.
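
    The quoted link between sagitta accuracy and momentum accuracy follows from the standard sagitta relation (the field and lever-arm values below are rough illustrative assumptions, not the actual chamber geometry):

        s \approx \frac{0.3\,B\,L^{2}}{8\,p_{T}}, \qquad \frac{\sigma_{p_{T}}}{p_{T}} \approx \frac{\sigma_{s}}{s}

    with B in tesla, L in metres and p_T in GeV. Taking, for illustration, B ≈ 0.5 T and L ≈ 5 m gives s ≈ 0.3 × 0.5 × 25 / (8 × 1000) m ≈ 0.5 mm at p_T = 1 TeV, so a 60 μm sagitta error corresponds to a relative momentum uncertainty of roughly 10-15%, of the order of the figure quoted above.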

  17. The FTK to Level-2 Interface Card (FLIC)

    CERN Document Server

    Anderson, John Thomas; The ATLAS collaboration; Drake, Gary; Love, Jeremy; Proudfoot, James; Wang, Rui; Zhang, Jinlong; Auerbach, Benjamin

    2015-01-01

    The FTK to Level-2 Interface Card (FLIC) of the ATLAS Fast TracKer (FTK) trigger upgrade is the final component in the FTK chain of custom electronics. The FTK performs full event tracking using the ATLAS Silicon detectors for every Level-1 accepted event at 100 kHz. The FLIC is a custom Advanced Telecommunications Computing Architecture (ATCA) card that interfaces the upstream FTK system with the ATLAS trigger and data acquisition (TDAQ) system, and allows for event processing on commercial PC blades making use of the 10 Gb Ethernet full-mesh ATCA backplane. The FLIC receives data on 8 optical links at a bandwidth of ~1 Gbps per channel, reformats the data to the ATLAS standard record format, and performs the conversion from local to global module identifier using look-up tables in SRAM. After processing, the event records are sent out to the TDAQ system using the S-LINK protocol at 2 Gbps, with a latency of O(10 microseconds). The data processing is handled in two Xilinx Virtex-6 FPGAs, with two additional Virtex-6 ...

  18. ATLAS Virtual Visits bringing the world into the ATLAS control room

    CERN Document Server

    AUTHOR|(CDS)2051192; The ATLAS collaboration; Yacoob, Sahal

    2016-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and particle physics, in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, that typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience from the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  19. The design and performance of the ATLAS Inner Detector trigger for Run 2

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2016-01-01

    The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm with the early LHC Run 2 data are discussed. The redesign of the ID trigger, carried out during the 2013-15 long shutdown to satisfy the demands of the higher-energy LHC Run 2 operation, is described. The ID trigger HLT algorithms are essential for nearly all trigger signatures within the ATLAS trigger. The detailed performance of the tracking algorithms with the early Run 2 data for the different trigger signatures is presented, including the detailed timing performance for the algorithms running on the redesigned single-stage ATLAS HLT farm. Comparisons with the Run 1 strategy are made and demonstrate the superior performance of the strategy adopted for Run 2.

  20. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1,500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information, which is constantly updated with an update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
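
    The publish/update/subscribe pattern described above can be illustrated with a minimal in-process sketch; the class and method names below are invented for illustration and are not the actual IS API:

        from collections import defaultdict
        from typing import Any, Callable

        class InfoService:
            """Toy in-process information service: named items, updates, subscriptions."""
            def __init__(self) -> None:
                self._items: dict[str, Any] = {}
                self._subscribers = defaultdict(list)

            def publish(self, name: str, value: Any) -> None:
                """Create or update an information item and notify its subscribers."""
                self._items[name] = value
                for callback in self._subscribers[name]:
                    callback(name, value)

            def read(self, name: str) -> Any:
                """Access any information item on request."""
                return self._items[name]

            def subscribe(self, name: str, callback: Callable[[str, Any], None]) -> None:
                """Register a callback invoked on every update of the named item."""
                self._subscribers[name].append(callback)

        service = InfoService()
        service.subscribe("HLT/node42/rejected_events", lambda n, v: print(f"update: {n} = {v}"))
        service.publish("HLT/node42/rejected_events", 1250)   # triggers the notification
        print(service.read("HLT/node42/rejected_events"))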

  1. Bone age assessment in Hispanic children: digital hand atlas compared with the Greulich and Pyle (G&P) atlas

    Science.gov (United States)

    Fernandez, James Reza; Zhang, Aifeng; Vachon, Linda; Tsao, Sinchai

    2008-03-01

    Bone age assessment is most commonly performed with the use of the Greulich and Pyle (G&P) book atlas, which was developed in the 1950s. The population of the United States is not as homogeneous as the Caucasian population of the Greulich and Pyle atlas of the 1950s, especially in the Los Angeles, California area. A digital hand atlas (DHA) based on 1,390 hand images of children of different racial backgrounds (Caucasian, African American, Hispanic, and Asian) aged 0-18 years was collected from Children's Hospital Los Angeles. Statistical analysis showed that significant discrepancies exist between Hispanic children and the G&P atlas standard. To validate the usage of the DHA as a clinical standard, diagnostic radiologists performed reads on Hispanic pediatric hand and wrist computed radiography images using either the G&P pediatric radiographic atlas or the Children's Hospital Los Angeles Digital Hand Atlas (DHA) as reference. The order in which the atlases were used (G&P followed by DHA or vice versa) for each image was prepared before actual reading began. Statistical analysis of the results was then performed to determine if a discrepancy exists between the two readings.

  2. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P; Nicolas, L

    2011-01-01

    The high radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2, the displacement damage in silicon in terms of 1-MeV(Si) equivalent neutron fluence, and the fluence of thermal neutrons at several locations in the ATLAS detector. In this paper, the design of the system, results of measurements, and a comparison of measured integrated doses and fluences with predictions from FLUKA simulations are presented.

  3. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectations since the start of data taking at the end of 2009. Since then, several billion collision events have been recorded by the ATLAS experiment. With a data-taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data of unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC are discussed in section 3.1. The ATLAS computing model groups the different types of computing centres of the ATLAS Collaboration into a tiered hierarchy that ranges from the Tier-0 at CERN, down to the 11 Tier-1 centres and the nearly 80 Tier-2 centres distributed worldwide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3s are institution-level centres, neither ATLAS funded nor controlled, that participate presuma...

  4. Fast pattern recognition with the ATLAS L1 track trigger for the HL-LHC

    CERN Document Server

    Martensson, Mikael; The ATLAS collaboration

    2016-01-01

    A fast hardware-based track trigger for the high-luminosity upgrade of the Large Hadron Collider (HL-LHC) is being developed in ATLAS. The goal is to achieve trigger levels in high pile-up collisions that are similar to or even better than those achieved in low pile-up running of the LHC, by adding tracking information to the ATLAS hardware trigger, which is currently based on information from the calorimeters and muon trigger chambers only. Two methods for fast pattern recognition are investigated. The first is based on matching tracker hits to pattern banks of simulated high-momentum tracks which are stored in a custom-made Associative Memory (AM) ASIC. The second is based on the Hough transform, where detector hits are transformed into a 2D Hough space with one variable related to track pT and one to track direction. Hits found by pattern recognition will be sent to a track-fitting step which calculates the track parameters. The speed and precision of the track fitting depend on the quality of the hits selected by the patte...
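
    The Hough-transform approach can be illustrated with a small, self-contained example; the linearised track model, constants and binning below are simplified placeholders and do not correspond to the actual ATLAS L1 track trigger design:

        import numpy as np

        # Simplified Hough-transform pattern recognition: each hit (r, phi) votes for all
        # (phi0, q/pT) hypotheses consistent with a linearised track model
        #     phi ≈ phi0 - K * qoverpt * r,
        # where K absorbs the magnetic-field constant. All values are illustrative only.
        K = 0.0015

        def hough_accumulate(hits, phi0_bins, qpt_bins):
            acc = np.zeros((len(phi0_bins), len(qpt_bins)), dtype=int)
            for r, phi in hits:
                for j, qpt in enumerate(qpt_bins):
                    phi0 = phi + K * qpt * r                 # invert the track model for this hit
                    if phi0 < phi0_bins[0] or phi0 > phi0_bins[-1]:
                        continue                             # hypothesis outside the considered range
                    acc[np.argmin(np.abs(phi0_bins - phi0)), j] += 1
            return acc

        # One synthetic track (phi0 = 0.30, q/pT = +0.80) plus random noise hits.
        rng = np.random.default_rng(1)
        radii = np.linspace(300.0, 1000.0, 8)                # hit radii (arbitrary units)
        track_hits = [(r, 0.30 - K * 0.80 * r + rng.normal(0.0, 1e-3)) for r in radii]
        noise_hits = [(rng.uniform(300.0, 1000.0), rng.uniform(0.0, 1.0)) for _ in range(15)]

        phi0_bins = np.linspace(0.0, 1.0, 51)
        qpt_bins = np.linspace(-1.0, 1.0, 41)
        acc = hough_accumulate(track_hits + noise_hits, phi0_bins, qpt_bins)

        i, j = np.unravel_index(acc.argmax(), acc.shape)     # the most-voted cell is the track candidate
        print(f"candidate: phi0 ≈ {phi0_bins[i]:.2f}, q/pT ≈ {qpt_bins[j]:.2f}, votes = {acc[i, j]}")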

  5. The ATLAS Run-2 Trigger Menu for higher luminosities: Design, Performance and Operational Aspects

    CERN Document Server

    Montejo Berlingen, Javier; The ATLAS collaboration

    2017-01-01

    The LHC, at design capacity, has a bunch-crossing rate of 40 MHz whereas the ATLAS experiment has an average recording rate of about 1 kHz. To reduce the rate of events, but maintain high selection efficiency for rare events such as physics signals beyond the Standard Model, a two-level trigger system is used. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. Despite the limited time available for processing collision events, the trigger system is able to exploit topological information as well as multivariate methods. In total, the ATLAS trigger system consists of thousands of different individual triggers. The ATLAS trigger menu specifies which triggers are used during data taking and how much rate a given trigger is allocated. This menu reflects not only the physics goals of the collaboration but also takes into consideration the instantaneous luminosity of the LHC and the design limits of the ATLAS detector and offline pro...

  6. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable

  7. The Locomotive is running full speed in the ATLAS MUONs

    CERN Multimedia

    Mikenberg, G.

    The ATLAS MUON Spectrometer is, like most of the ATLAS systems, a large collection of detectors that operate at the limit of the technology. They have to provide the MUON trigger for the ATLAS detector over very large surfaces (7000 m2) and measure the passage of MUONs over distances ranging from 5 to 13 m, with relative precisions between the various measurement planes of a few tens of microns, while controlling various external parameters ranging from the relative positions of the detectors (alignment systems controlled to the level of 20 microns) to the magnetic field (to be reconstructed at the level of 20 Gauss). Although many of the integration problems with the rest of the ATLAS detectors have not been fully clarified, one needs to start production in order to be ready on time to enjoy the physics of the LHC. This means starting the coordinated work in more than 25 production and testing sites, located all around the world, that have to produce precision detectors at industrial speed, which sho...

  8. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Joos, M; Schumacher, J; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS data-acquisition system. It receives and buffers event data accepted from all sub-detectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a GbE-based network. The ATLAS ROS will be completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3. The new ROS will consist of roughly 100 Linux-based 2U-high rack-mounted server PCs, each equipped with 2 PCIe I/O cards and four 10GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, called RobinNP. They will provide connectivity to about 2000 point-to-point optical links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and challenges in current COTS PC architectures with non-uniform memory and I/O access paths. In this paper the requirements...

  9. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade with the goal of providing a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, referred to as the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
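
    As a purely software illustration of what 2D pixel clustering with centroid calculation means (not the FTK Input Mezzanine firmware), fired pixels that touch each other can be grouped and each group's centre reported:

        from collections import deque

        def cluster_pixels(fired):
            """Group fired pixels (col, row) into 8-connected clusters and return, for each
            cluster, the list of its pixels and its unweighted centroid."""
            remaining = set(fired)
            clusters = []
            while remaining:
                seed = remaining.pop()
                queue, members = deque([seed]), [seed]
                while queue:
                    c, r = queue.popleft()
                    for dc in (-1, 0, 1):
                        for dr in (-1, 0, 1):
                            n = (c + dc, r + dr)
                            if n in remaining:
                                remaining.remove(n)
                                members.append(n)
                                queue.append(n)
                cx = sum(c for c, _ in members) / len(members)
                cy = sum(r for _, r in members) / len(members)
                clusters.append((members, (cx, cy)))
            return clusters

        # Toy hit map: two separate charge deposits on one pixel module.
        hits = [(10, 5), (11, 5), (11, 6), (40, 20), (41, 21)]
        for members, centroid in cluster_pixels(hits):
            print(f"cluster of {len(members)} pixels, centroid = {centroid}")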

  10. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade with the goal of providing a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, referred to as the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  11. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    INSPIRE-00390105

    2016-07-11

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will be about an order of magnitude larger than the LHC instantaneous luminosity in Run 1. The first-level muon trigger rate is dominated by low-momentum muons below the nominal trigger threshold due to the moderate momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at the HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube (MDT) chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 microseconds. A hardware demonstrator of the fast read-out chain has been successfully tested under HL-LHC operating conditions at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  12. A Highly Selective First-Level Muon Trigger With MDT Chamber Data for ATLAS at HL-LHC

    CERN Document Server

    Nowak, Sebastian; The ATLAS collaboration

    2015-01-01

    Highly selective triggers are essential for the physics programme of the ATLAS experiment at the HL-LHC, where the instantaneous luminosity will be about an order of magnitude larger than the LHC design luminosity. The Level-1 muon trigger rate is dominated by low-momentum muons below the nominal trigger threshold due to the limited momentum resolution of the Resistive Plate and Thin Gap trigger chambers. The resulting high trigger rates at the HL-LHC can be sufficiently reduced by using the data of the precision Muon Drift Tube (MDT) chambers for the trigger decision. This requires the implementation of a fast MDT read-out chain and of a fast MDT track reconstruction algorithm with a latency of at most 6 µs. A hardware demonstrator of the fast read-out chain has been successfully tested at the high HL-LHC background rates at the CERN Gamma Irradiation Facility. The fast track reconstruction algorithm has been implemented on a fast trigger processor.

  13. Performance of a First-Level Muon Trigger with High Momentum Resolution Based on the ATLAS MDT Chambers for HL-LHC

    CERN Document Server

    Gadow, P.; Kortner, S.; Kroha, H.; Müller, F.; Richter, R.

    2016-01-01

    Highly selective first-level triggers are essential to exploit the full physics potential of the ATLAS experiment at the High-Luminosity LHC (HL-LHC). The concept for a new muon trigger stage using the precision monitored drift tube (MDT) chambers to significantly improve the selectivity of the first-level muon trigger is presented. It is based on fast track reconstruction in all three layers of the existing MDT chambers, made possible by an extension of the first-level trigger latency to six microseconds and a new MDT read-out electronics required for the higher overall trigger rates at the HL-LHC. Data from pp collisions at √s = 8 TeV is used to study the minimal muon transverse momentum resolution that can be obtained using the MDT precision chambers, and to estimate the resolution and efficiency of the MDT-based trigger. A resolution of better than 4.1% is found in all sectors under study. With this resolution, a first-level trigger with a threshold of 18 GeV becomes fully e...

  14. Report to users of Atlas

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1996-06-01

    This report contains the following topics: Status of the ATLAS Accelerator; Highlights of Recent Research at ATLAS; Program Advisory Committee; ATLAS User Group Executive Committee; FMA Information Available On The World Wide Web; Conference on Nuclear Structure at the Limits; and Workshop on Experiments with Gammasphere at ATLAS

  15. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    Verlaat, Bartholomeus; The ATLAS collaboration

    2016-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower temperature operation (<-35°C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose of up to 550 fb^-1 integrated luminosity. This paper describes the design, development, construction and commissioning of the IBL CO2 cooling system. It describes the challenges overcome and the important lessons learned for the development of future systems which are now under design for the Phase-II upgrade detectors.

  16. A Neural Network Approach to Muon Triggering in ATLAS

    CERN Document Server

    Livneh, Ran; CERN. Geneva

    2007-01-01

    The extremely high rate of events that will be produced in the future Large Hadron Collider requires the triggering mechanism to make precise decisions in a few nanoseconds. This poses a complicated inverse problem, arising from the inhomogeneous nature of the magnetic fields in ATLAS. This thesis presents a study of an application of Artificial Neural Networks to the muon triggering problem in the ATLAS end-cap. A comparison with realistic results from the ATLAS first-level trigger simulation was in favour of the neural network, but this is mainly due to superior resolution available off-line. Other options for applying a neural network to this problem are discussed.

  17. The ATLAS Trigger Menu design for higher luminosities in Run 2

    CERN Document Server

    Torro Pastor, Emma; The ATLAS collaboration

    2018-01-01

    The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting with an LHC design bunch crossing rate of 40 MHz. To reduce the large background rate while maintaining a high selection efficiency for rare physics events (such as beyond the Standard Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multivariate methods to carry out the necessary physics filtering for the many analyses that are pursued by the ATLAS community. In total, the ATLAS online selection consists of nearly two thousand individual triggers. A Trigger Menu is the compilation of these triggers; it specifies the physics selection algorithms to be used during data taking and the rate and bandwidth a given trigger is allocated. Trigger menus must reflect the physics goals of the collaboration for a given run, but also take into con...

  18. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to that of the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
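
    The two-stage idea described above, a cheap preliminary relevance metric to prune the pool followed by a costlier refined metric on the survivors, can be sketched schematically; the scoring functions below are simple placeholders rather than the metrics and registration steps used by the authors:

        import numpy as np

        def two_stage_selection(target, atlases, augmented_size, fusion_size):
            """Stage 1: rank all atlases by a cheap preliminary relevance metric and keep an
            augmented subset. Stage 2: re-rank only that subset with a costlier refined metric
            and keep the final fusion set. Both metrics here are simple placeholders."""
            # Stage 1: low-cost score, e.g. similarity of coarsely downsampled images.
            coarse = lambda img: img[::4, ::4]
            prelim = [-np.mean((coarse(target) - coarse(a)) ** 2) for a in atlases]
            augmented = sorted(range(len(atlases)), key=lambda i: prelim[i], reverse=True)[:augmented_size]

            # Stage 2: refined score on the augmented subset only, e.g. full-resolution
            # correlation (a real pipeline would run full-fledged registration here).
            def refined(i):
                return float(np.corrcoef(target.ravel(), atlases[i].ravel())[0, 1])
            fusion = sorted(augmented, key=refined, reverse=True)[:fusion_size]
            return augmented, fusion

        rng = np.random.default_rng(0)
        target = rng.normal(size=(32, 32))
        pool = [target + rng.normal(scale=s, size=target.shape) for s in np.linspace(0.1, 2.0, 12)]

        augmented, fusion = two_stage_selection(target, pool, augmented_size=6, fusion_size=3)
        print("augmented subset:", augmented)
        print("fusion set:", fusion)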

  19. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the needs of a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, the authors' scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas
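
    The Dice similarity coefficient used to report accuracy above is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between a computed segmentation and its manual reference; a short NumPy sketch, independent of the authors' code:

    import numpy as np

    def dice_coefficient(seg, ref):
        """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
        seg = np.asarray(seg, dtype=bool)
        ref = np.asarray(ref, dtype=bool)
        total = seg.sum() + ref.sum()
        if total == 0:        # both masks empty: treat as perfect overlap
            return 1.0
        return 2.0 * np.logical_and(seg, ref).sum() / total

    # Example with two small binary masks: 1 overlapping voxel, sizes 2 and 1.
    print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1 / (2+1) ≈ 0.667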

  20. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the needs of a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, the authors' scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors