WorldWideScience

Sample records for atlas readout system

  1. The ATLAS liquid Argon calorimeters read-out system

    CERN Document Server

    Blondel, A; Fayard, L; La Marra, D; Léger, A; Matricon, P; Perrot, G; Poggioli, L; Prast, J; Riu, I; Simion, S

    2004-01-01

The calorimetry of the ATLAS experiment takes advantage of different detectors based on the liquid Argon (LAr) technology. Signals from the LAr calorimeters are processed by various stages before being delivered to the Data Acquisition system. The calorimeter cell signals are received by the front-end boards, which digitize a predetermined number of samples of the bipolar waveform and send them to the Read-Out Driver (ROD) boards. The ROD board receives triggered data from 1028 calorimeter cells and determines the precise energy and timing of the signals by processing the discrete samplings of the pulse. In addition, it formats the digital stream for the following elements of the DAQ chain and performs monitoring. The architecture and functionality of the ATLAS LAr ROD board are discussed, along with the final design of the Processing Unit boards housing the Digital Signal Processors (DSP). (9 refs).
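The energy and time reconstruction from discrete pulse samples that the ROD performs is typically based on optimal filtering, where amplitude and phase are weighted sums of the samples. A minimal sketch of that idea, with made-up coefficients and sample values (the real a_i/b_i weights are derived from the measured pulse shape and noise autocorrelation, and are not reproduced here):

```python
def optimal_filter(samples, a_coeffs, b_coeffs):
    """Reconstruct amplitude (energy) and time offset from pulse samples.

    E     = sum_i a_i * s_i
    E * t = sum_i b_i * s_i   =>   t = (sum_i b_i * s_i) / E

    The coefficient values are purely illustrative.
    """
    energy = sum(a * s for a, s in zip(a_coeffs, samples))
    tau = sum(b * s for b, s in zip(b_coeffs, samples)) / energy
    return energy, tau

# Illustrative 5-sample pulse and made-up filter coefficients
samples = [10.0, 60.0, 100.0, 70.0, 30.0]
a = [0.1, 0.25, 0.4, 0.2, 0.05]
b = [-0.2, -0.1, 0.0, 0.1, 0.2]
E, t = optimal_filter(samples, a, b)
```

With a handful of samples per pulse this is only a few multiply-accumulates per channel, which is the kind of workload the DSPs on the Processing Unit boards are suited to.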

  2. Development of a read out driver for ATLAS micromegas based on the Scalable Readout System

    International Nuclear Information System (INIS)

With future LHC luminosity upgrades, part of the ATLAS muon spectrometer has to be changed to cope with the increased flux of uncorrelated neutron and gamma particles. Micromegas detectors were chosen as precision trackers for the New Small Wheels, which will replace the current Small Wheel muon detector stations during the LHC shutdown foreseen for 2018. To read out these detectors together with all other ATLAS subsystems, a readout driver was developed to integrate the micromegas detectors into the ATLAS data acquisition infrastructure. The readout driver is based on the Scalable Readout System, and its tasks include trigger handling, slow control, event building and data transmission to the high-level readout systems. This article describes the layout and functionalities of this readout driver and its components, as well as a test of its functionalities in the cosmic ray facility of Ludwig-Maximilians University Munich.

  3. Development and test of the DAQ system to readout a Micromegas prototype installed into the ATLAS experiment

    International Nuclear Information System (INIS)

The Micromegas chambers have been chosen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019. A Micromegas quadruplet with an active area of 1 m × 0.5 m has been built at CERN as a prototype of the future Small Wheel detectors and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. For the integration of this prototype detector into the ATLAS data acquisition system, an ATLAS-compatible ReadOut Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used. A dedicated Micromegas segment has been implemented in the framework of the ATLAS TDAQ online software, in order to include the detector inside the main ATLAS DAQ partition. A full set of tests, covering both hardware and software aspects, is presented.

  4. FELIX - the new detector readout system for the ATLAS experiment

    CERN Document Server

AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Vermeulen, Jos; Wu, Weihao; Zhang, Jinlong

    2016-01-01

From the ATLAS Phase-I upgrade and onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high speed serial fiber transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. Replacing traditional point-to-point links between Front-end components and the DAQ system by a switched network, FELIX provides scaling, flexibility, uniformity and upgradability. Different Front-end data types or different data sources can be routed to different network endpoints that handle that data type or source: e.g. event data, configuration, calibration, detector control, monito...
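The routing role described here (different front-end data types dispatched to different network endpoints) can be illustrated with a toy dispatch table; the type tags and endpoint addresses below are invented for illustration and are not FELIX's actual configuration format:

```python
# Hypothetical data-type tags on incoming front-end messages and the
# (made-up) network endpoints that handle each type.
ROUTES = {
    "event":   "tcp://daq-readout:5000",
    "config":  "tcp://detector-control:5001",
    "calib":   "tcp://calibration-farm:5002",
    "monitor": "tcp://monitoring:5003",
}

def route(message_type):
    """Return the network endpoint for a given data type.

    Unknown types raise, mimicking a router that refuses unmapped traffic.
    """
    try:
        return ROUTES[message_type]
    except KeyError:
        raise ValueError(f"no endpoint configured for {message_type!r}")
```

The point of the design is that adding a new consumer of, say, calibration data means changing a routing entry, not re-cabling a point-to-point link.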

  5. Analog pipeline readout for ATLAS calorimetry

    International Nuclear Information System (INIS)

This paper presents the design and prototype testing of an analog pipeline readout module suitable for readout of the LAr calorimetry at the large hadron collider. The module is based on switched-capacitor-array (SCA) chips. The design has been driven by the readout requirements of the ATLAS electromagnetic liquid argon calorimeter and the ATLAS trigger design parameters. The results indicate that an analog pipeline readout system meeting the ATLAS requirements can be built using our modules. The SCA chip employed has a resolution approaching 13 bits (using the full range of the SCA) and can achieve a 16-bit dynamic range using a dual-range scheme. A brief description of the pipeline controller under development, which will enable the SCA readout system to run as a deadtimeless analog RAM, is also given. (orig.)
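The dual-range scheme mentioned above can be sketched as reading the signal through two gain ranges and keeping the high-gain copy unless it saturates; the gain ratio and full-scale code below are illustrative, not the actual module's parameters:

```python
ADC_FULL_SCALE = 8191   # ~13-bit single-range code space (illustrative)
HIGH_GAIN = 8           # illustrative ratio between the two gain ranges
LOW_GAIN = 1

def digitize(signal, gain):
    """Clip-and-round model of the SCA + ADC for one gain range."""
    code = round(signal * gain)
    return min(code, ADC_FULL_SCALE)

def dual_range_readout(signal):
    """Use the high-gain range unless it saturates, else fall back to low gain."""
    hi = digitize(signal, HIGH_GAIN)
    if hi < ADC_FULL_SCALE:
        return hi / HIGH_GAIN
    return digitize(signal, LOW_GAIN) / LOW_GAIN
```

A gain ratio of 8 adds 3 bits of dynamic range on top of the ~13-bit single-range resolution, consistent with the 16-bit figure quoted.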

6. Improved performance for the ATLAS ReadOut System with the switch-based architecture

    CERN Document Server

Schroer, N; Della Volpe, D; Gorini, B; Green, B; Joos, M; Kieft, G; Kordas, K; Kugel, A; Misiejuk, A; Teixeira-Dias, P; Tremblet, L; Vermeulen, J; Werner, P; Wickens, F

    2009-01-01

About 600 custom-built ReadOut Buffer INput (ROBIN) PCI boards are used in the DataCollection system of the ATLAS experiment at CERN. They are plugged into the PCI slots of about 150 PCs of the ReadOut system (ROS). In the standard bus-based setup of the ROS, requests and event data are passed via the PCI interfaces. The performance meets the requirements, but may need to be enhanced for more demanding use cases. Modifications in the software and firmware of the ROBINs have made it possible to improve the performance by using the onboard Gigabit Ethernet interfaces for passing part of the requests and of the data in the so-called switch-based scenario. Details of these modifications as well as measurement results will be presented in this paper.

  7. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunications Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiber-optic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.
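As a sanity check, the quoted link counts reduce to simple arithmetic (the even split across chambers and COBs is an assumption for illustration):

```python
input_links = 320   # fiber-optic input links into the readout system
chambers = 32       # CSC chambers
cobs = 6            # COB carrier boards in one ATCA shelf

links_per_chamber = input_links // chambers   # 10 links per chamber
links_per_cob = input_links / cobs            # ~53.3 links handled per COB
```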

  8. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunications Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiber-optic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  9. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Claus, R.

    2016-07-01

The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  10. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Yildiz, Suleyman Cenk; The ATLAS collaboration

    2015-01-01

The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunications Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambe...

  11. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    ATLAS CSC Collaboration; The ATLAS collaboration

    2016-01-01

The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunications Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chamber...

  12. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Claus, Richard; The ATLAS collaboration

    2015-01-01

The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thr...

  13. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Claus, Richard; The ATLAS collaboration

    2015-01-01

The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thro...

  14. The ATLAS ReadOut System-Performance with first data and perspective for the future

    Energy Technology Data Exchange (ETDEWEB)

    Crone, G. [University College London (United Kingdom); Della Volpe, D. [Universita and INFN, Napoli (Italy); Gorini, B. [CERN (Switzerland); Green, B. [Royal Holloway University of London (United Kingdom); Joos, M., E-mail: markus.joos@cern.c [CERN (Switzerland); Kieft, G. [Nikhef, Amsterdam (Netherlands); Kordas, K. [University Bern (Switzerland); Kugel, A. [Ruprecht-Karls-Universitaet Heidelberg (Germany); Misiejuk, A. [Royal Holloway University of London (United Kingdom); Schroer, N. [Ruprecht-Karls-Universitaet Heidelberg (Germany); Teixeira-Dias, P. [Royal Holloway University of London (United Kingdom); Tremblet, L. [CERN (Switzerland); Vermeulen, J. [Nikhef, Amsterdam (Netherlands); Wickens, F. [Rutherford Appleton Laboratory (United Kingdom); Werner, P. [CERN (Switzerland)

    2010-11-01

The ATLAS ReadOut System (ROS) receives data fragments from ~1600 detector readout links, buffers them and provides them on demand to the second-level trigger or to the event building system. The ROS is implemented with ~150 PCs. Each PC houses a few, typically 4, custom-built PCI boards (ROBIN) and a 4-port PCIe Gigabit Ethernet NIC. The PCs run a multi-threaded object-oriented application managing the requests for data retrieval and for data deletion coming through the NIC, and the collection and output of data from the ROBINs. At a nominal event fragment arrival rate of 75 kHz the ROS has to concurrently service up to approximately 20 kHz of data requests from the second-level trigger and up to 3.5 kHz of requests from event building nodes. The full system has been commissioned in 2007. Performance of the system in terms of stability and reliability, results of laboratory rate capability measurements and upgrade scenarios are discussed in this paper.
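The quoted figures imply the per-PC input load, assuming fragments are distributed uniformly over links and PCs (an idealization):

```python
links = 1600          # detector readout links into the ROS
pcs = 150             # ROS PCs
l1_rate_hz = 75_000   # nominal event-fragment arrival rate per link

links_per_pc = links / pcs                      # ~10.7 readout links per PC
fragment_input_hz = l1_rate_hz * links_per_pc   # ~800 kHz of fragments into each PC
```

Each PC must therefore buffer on the order of 800k fragments per second while simultaneously answering trigger and event-building requests, which is why the ROBIN boards do the buffering in hardware.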

  15. The ATLAS ReadOut System performance with first data and perspective for the future

    CERN Document Server

    Crone, G; Gorini, B; Green, B; Joos, M; Kieft, G; Kordas, K; Kugel, A; Misiejuk, A; Schroer, N; Teixeira-Dias, P; Tremblet, L; Vermeulen, J; Wickens, F; Werner, P

    2010-01-01

    The ATLAS ReadOut System (ROS) receives data fragments from ~1600 detector readout links, buffers them and provides them on demand to the second-level trigger or to the event building system. The ROS is implemented with ~150 PCs. Each PC houses a few, typically 4, custom-built PCI boards (ROBIN) and a 4-port PCIe Gigabit Ethernet NIC. The PCs run a multi-threaded object-oriented application managing the requests for data retrieval and for data deletion coming through the NIC, and the collection and output of data from the ROBINs. At a nominal event fragment arrival rate of 75 kHz the ROS has to concurrently service up to approximately 20 kHz of data requests from the second-level trigger and up to 3.5 kHz of requests from event building nodes. The full system has been commissioned in 2007. Performance of the system in terms of stability and reliability, results of laboratory rate capability measurements and upgrade scenarios are discussed in this paper.

  16. The ATLAS Read-Out System Performance with first data and perspective for the future

    CERN Document Server

    Crone, G; Gorini, B; Green, B; Joos, M; Kieft, G; Kordas, K; Kugel, A; Misiejuk, A; Schroer, N; Teixeira-Dias, P; Tremblet, L; Vermeulen, J; Wickens, F; Werner, P

    2009-01-01

The Readout System (ROS) is the ATLAS DAQ element that receives the data fragments from the ~1600 detector readout links, buffers them and provides them on demand to the second-level trigger processor or to the event building system. The ROS system is implemented with ~150 PCs, each one housing on average 4 custom-built PCI mezzanine boards (ROBIN) and a 4-port PCIe NIC. Each PC runs a multithreaded OO-software framework managing the requests for data coming through the NIC and collecting the corresponding fragments from the physical buffers. At an LHC luminosity of 10^33 cm^-2 s^-1, corresponding to an average Level-1 trigger rate of 75 kHz, the ROS has to concurrently service up to approximately 20 kHz of data requests from the Level-2 trigger and up to 3.5 kHz of requests from event building nodes. The system has been commissioned in 2007 and since then has been working smoothly. For most of 2008 the main activity has been data taking with cosmics, in which the Level-1 trigger rate is much lower with respect to L...

  17. Development of a Standardised Readout System for Active Pixel Sensors in HV/HR-CMOS Technologies for ATLAS Inner Detector Upgrades

    International Nuclear Information System (INIS)

The LHC Phase-II Upgrade results in new challenges for tracking detectors, for example in terms of cost effectiveness, resolution and radiation hardness. Active Pixel Sensors in HV/HR-CMOS technologies show promising results coping with these challenges. In order to demonstrate the feasibility of hybrid modules with active CMOS sensors and readout chips for the future ATLAS Inner Tracker, ATLAS R&D activities have started. After introducing the basic concepts and the demonstrator program, the development of an ATLAS compatible readout system will be presented, as well as tuning procedures and measurements with demonstrator modules to test the readout system.

  18. Demonstrator System for the Phase-I Upgrade of the Trigger Readout Electronics of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    FRAGNAUD, J; The ATLAS collaboration

    2014-01-01

    The trigger readout electronics of the ATLAS LAr Calorimeters will be improved for the Phase-I luminosity upgrade of the LHC to enhance the trigger feature extraction. Signals with higher spatial granularity will be digitized and processed by newly developed front-end and back-end components. In order to evaluate technical and performance aspects, a demonstrator system is being set up which is planned to be installed on the ATLAS detector during the upcoming LHC run. Results from system tests of the analog signal treatment, the trigger digitizer, the optical signal transmission and the FPGA-based back-end are reported.

  19. The ATLAS ReadOut System - improved performance for the switchbased setup

    CERN Document Server

    Schroer, N; The ATLAS collaboration; Della Volpec, D; Gorini, B; Green, B; Joos, M; Kieft, G; Kordas, K; Kugel, A; Misiejuk, A; TeixeiraDias, P; Tremblet, L; Vermeulen, J; Werner, P; Wickens, F

    2009-01-01

About 600 custom-built ReadOut Buffer INput (ROBIN) PCI boards are used in the DataCollection system of the ATLAS experiment at CERN. In the standard setup, requests and event data are passed via the PCI interfaces. The performance meets the requirements, but may need to be enhanced for more demanding use cases. Modifications in the software and firmware of the ROBINs have made it possible to improve the performance by using the onboard Gigabit Ethernet interfaces for passing part of the requests and of the data.

  20. Readout electronics for the ATLAS semiconductor tracker

    International Nuclear Information System (INIS)

The binary readout architecture as a baseline and the analogue one as a fall-back option have been adopted recently by the ATLAS semiconductor tracker group for the readout of silicon strip detectors. A brief overview of the different architectures considered before, as well as the status of the binary readout development, will be presented. A new idea of the binary readout architecture employing a dual-threshold scheme will be discussed, and new results obtained for the full analogue readout chip realised in the DMILL technology will be reported. (orig.)

  1. Performance of the Demonstrator System for the Phase-I Upgrade of the Trigger Readout Electronics of the ATLAS Liquid Argon Calorimeters

    International Nuclear Information System (INIS)

For the Phase-I luminosity upgrade of the LHC a higher granularity trigger readout of the ATLAS LAr Calorimeters is foreseen to enhance the trigger feature extraction and background rejection. The new readout system digitizes the detector signals, which are grouped into 34000 so-called Super Cells, with 12 bit precision at 40 MHz and transfers the data on optical links to the digital processing system, which extracts the Super Cell energies. A demonstrator version of the complete system has now been installed and operated on the ATLAS detector. Results from the commissioning and performance measurements are reported.
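The raw data volume behind these numbers is easy to estimate: 34000 Super Cells, 12 bits per sample, sampled at the 40 MHz bunch-crossing frequency (overheads such as line encoding and protocol framing are ignored here):

```python
super_cells = 34_000
bits_per_sample = 12
sampling_rate_hz = 40_000_000   # 40 MHz LHC bunch-crossing frequency

raw_bandwidth_bps = super_cells * bits_per_sample * sampling_rate_hz
raw_bandwidth_tbps = raw_bandwidth_bps / 1e12   # ~16.3 Tb/s before any overhead
```

Roughly 16 Tb/s of payload explains why the data leave the front end on many parallel optical links rather than a conventional network.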

  2. Performance of the Demonstrator System for the Phase-I Upgrade of the Trigger Readout Electronics of the ATLAS Liquid-Argon Calorimeters

    CERN Document Server

    Dumont Dayot, Nicolas; The ATLAS collaboration

    2015-01-01

    For the Phase-I luminosity upgrade of the LHC a higher granularity trigger readout of the ATLAS LAr Calorimeters is foreseen in order to enhance the trigger feature extraction and background rejection. The new readout system digitizes the detector signals, which are grouped into 34000 so-called Super Cells, with 12 bit precision at 40 MHz and transfers the data on optical links to the digital processing system, which extracts the Super Cell energies. A demonstrator version of the complete system has now been installed and operated on the ATLAS detector. Results from the commissioning and performance measurements will be reported.

  3. Performance of the Demonstrator System for the Phase-I Upgrade of the Trigger Readout Electronics of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Dumont Dayot, Nicolas; The ATLAS collaboration

    2015-01-01

    For the Phase-I luminosity upgrade of the LHC a higher granularity trigger readout of the ATLAS LAr Calorimeters is foreseen in order to enhance the trigger feature extraction and background rejection. The new readout system digitizes the detector signals, which are grouped into 34000 so-called Super Cells, with 12 bit precision at 40 MHz and transfers the data on optical links to the digital processing system, which extracts the Super Cell energies. A demonstrator version of the complete system has now been installed and operated on the ATLAS detector. Results from the commissioning and performance measurements will be reported.

  4. Development of the Trigger Readout System for Phase-I Upgrade of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Xu, Hao; The ATLAS collaboration

    2015-01-01

The ATLAS Liquid Argon (LAr) Calorimeters were designed and built to measure electromagnetic and hadronic energy in proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and at instantaneous luminosities up to 10^34 cm^-2 s^-1. An LHC upgrade is planned to enhance the luminosities to 2-3 x 10^34 cm^-2 s^-1 and to deliver an integrated luminosity of about 300 fb^-1 during Run 3 from 2019 through 2021. In order to improve the identification performance for electrons, photons, taus, jets and missing energy at high background rejection rates, an improved spatial granularity of the trigger primitives has been proposed. Therefore, a new trigger readout system is being designed to digitize and process the signals with higher spatial granularity. A demonstrator system has been developed and installed on the ATLAS detector to evaluate the technical and performance aspects. Analog signal parameters including noise and cross-talk have been analyzed. The performance of the new readout system is...

  5. Development of the Trigger Readout System for the Phase-I Upgrade of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Xu, Hao; The ATLAS collaboration

    2015-01-01

The ATLAS Liquid Argon (LAr) Calorimeters were designed and built to measure electromagnetic and hadronic energy in proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and at instantaneous luminosities up to 10^34 cm^-2 s^-1. An LHC upgrade is planned to enhance the luminosities to 2-3 x 10^34 cm^-2 s^-1 and to deliver an integrated luminosity of about 300 fb^-1 during Run 3 from 2019 through 2021. In order to improve the identification performance for electrons, photons, taus, jets and missing energy at high background rejection rates, an improved spatial granularity of the trigger primitives has been proposed. Therefore, a new trigger readout system is being designed to digitize and process the signals with higher spatial granularity. A demonstrator system has been developed and installed on the ATLAS detector to evaluate the technical and performance aspects. Analog signal parameters including noise and cross-talk have been analyzed. The performance of the new demonstrator system in the ...

  6. Replacing full custom DAQ test system by COTS DAQ components on example of ATLAS SCT readout

    CERN Document Server

    Dwuznik, M

    2009-01-01

A test system developed for the ABCN-25 chip for the ATLAS Inner Detector Upgrade is presented. The system is based on commercial off-the-shelf (COTS) DAQ components by National Instruments and is foreseen to aid in chip characterization and hybrid/module development, complementing full custom VME based setups. The key differences from the point of view of software development are presented, together with guidelines for developing high performance LabVIEW code. Some real-world benchmarks will also be presented together with chip test results. The presented tests show good agreement of test results between the test setups used at different sites, as well as agreement with design specifications of the chip.

  7. Research and Development for a Free-Running Readout System for the ATLAS LAr Calorimeters at the High Luminosity LHC

    CERN Document Server

    Hils, Maximilian; The ATLAS collaboration

    2015-01-01

The ATLAS Liquid Argon (LAr) Calorimeters were designed and built to measure electromagnetic and hadronic energy in proton-proton collisions produced at the Large Hadron Collider (LHC) at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to 10^34 cm^-2 s^-1. The High Luminosity LHC (HL-LHC) programme is now developed for up to 5-7 times the design luminosity, with the goal of accumulating an integrated luminosity of 3000 fb^-1. In the HL-LHC phase, the increased radiation levels require a replacement of the front-end (FE) electronics of the LAr Calorimeters. Furthermore, the ATLAS trigger system is foreseen to increase the trigger accept rate and the trigger latency, which requires a larger data volume to be buffered. Therefore, the LAr Calorimeter read-out will be exchanged with a new FE and a high bandwidth back-end (BE) system for receiving data from all ...

  8. Research and Development for a Free-Running Readout System for the ATLAS LAr Calorimeters at the High Luminosity LHC

    CERN Document Server

    Hils, Maximilian; The ATLAS collaboration

    2015-01-01

The ATLAS Liquid Argon (LAr) Calorimeters were designed and built to measure electromagnetic and hadronic energy in proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to $10^{34} \\text{cm}^{-2} \\text{s}^{-1}$. The High Luminosity LHC (HL-LHC) programme is now developed for up to 5-7 times the design luminosity, with the goal of accumulating an integrated luminosity of $3000~\\text{fb}^{-1}$. In the HL-LHC phase, the increased radiation levels require a replacement of the front-end electronics of the LAr Calorimeters. Furthermore, the ATLAS trigger system is foreseen to increase the trigger accept rate by a factor 10 to 1 MHz and the trigger latency by a factor of 20 which requires a larger data volume to be buffered. Therefore, the LAr Calorimeter read-out will be exchanged with a new front-end and a high bandwidth back-end system for receiving data from all 186,000 channels at 40 MHz LHC bunch-crossing frequency and for off-detector buffering...

  9. Upgrade of the Trigger Readout System of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Marino, CP; The ATLAS collaboration

    2013-01-01

    The ATLAS detector was designed and built to study proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to 10^34 cm^-2 s^-1. Liquid argon (LAr) sampling calorimeters are employed for all electromagnetic calorimetry in the pseudorapidity region |eta|<3.2, and for hadronic calorimetry in the region from |eta|=1.5 to |eta|=4.9. The ATLAS Liquid Argon (LAr) calorimeters produce a total of 182,486 signals which are digitized and processed by the front-end and back-end electronics at every triggered event. In addition, the front-end electronics sums analog signals to provide coarsely grained energy sums, called trigger towers, to the first-level trigger system, which is optimized for nominal LHC luminosities. In 2018, an instantaneous luminosity of 2-3 x 10^34 cm^-2 s^-1 is expected, far beyond the nominal one for which the detector was designed. In order to cope with this increased trigger rate, an improved spatial granularity of the trigger primi...

  10. Upgrade of the Trigger Readout System of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Marino, CP; The ATLAS collaboration

    2014-01-01

    The ATLAS detector was designed and built to study proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to $10^{34} \\rm{cm}^{-2} \\rm{s}^{-1}$. Liquid argon (LAr) sampling calorimeters are employed for all electromagnetic calorimetry in the pseudorapidity region $|\\eta|$ < 3.2, and for hadronic calorimetry in the region from $|\\eta|=$1.5 to $|\\eta|=$4.9. The ATLAS Liquid Argon (LAr) calorimeters produce a total of 182,486 signals which are digitized and processed by the front-end and back-end electronics at every triggered event. In addition, the front-end electronics sums analog signals to provide coarsely grained energy sums, called trigger towers, to the first-level trigger system, which is optimized for nominal LHC luminosities. In 2018, an instantaneous luminosity of 2-3 $\\times 10^{34} \\rm{cm}^{-2} \\rm{s}^{-1}$ is expected, far beyond the nominal one for which the detector was designed. In order to cope with this increased trigger rate,...

  11. Integrator based readout in Tile Calorimeter of the ATLAS experiment

    CERN Document Server

    Gonzalez Parra, G

    2012-01-01

    TileCal is the hadronic tile calorimeter of the ATLAS experiment at LHC/CERN. To equalize the response of individual TileCal cells with a precision better than 1% and to monitor the response of each cell over time, a calibration and monitoring system based on a Cs137 radioactive source driven through the calorimeter volume by liquid flow has been implemented. This calibration system relies on a dedicated readout chain based on slow integrators that read currents from the TileCal photomultipliers, integrating over milliseconds during the calibration runs. Moreover, during the LHC collisions the TileCal integrator-based readout provides the signal coming from inelastic proton-proton collisions at low momentum transfer (MB), which is used to monitor the ATLAS instantaneous luminosity and to continuously monitor the response of all calorimeter cells during data-taking.
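
    The slow-integrator read-out described above amounts to averaging the photomultiplier anode current over millisecond-scale windows. A minimal sketch of that principle, with an illustrative sampling rate, window length and current level (none of these numbers are from the source):

```python
import random

def integrate_current(samples_na, window_len):
    """Average PMT anode current (nA) over fixed-length windows,
    mimicking a slow integrator with a millisecond time constant."""
    averages = []
    for start in range(0, len(samples_na) - window_len + 1, window_len):
        window = samples_na[start:start + window_len]
        averages.append(sum(window) / window_len)
    return averages

# Hypothetical example: a steady 50 nA minimum-bias current with noise.
random.seed(1)
samples = [50.0 + random.gauss(0.0, 5.0) for _ in range(4000)]
per_ms = integrate_current(samples, 1000)  # e.g. 1000 samples per window
print(per_ms)  # four window averages, each close to 50 nA
```

A drifting window average over time would then flag a change in PMT gain or optics response.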

  12. Performance of the Electronic Readout of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Abreu, H; Aleksa, M; Aperio Bella, L; Archambault, JP; Arfaoui, S; Arnaez, O; Auge, E; Aurousseau, M; Bahinipati, S; Ban, J; Banfi, D; Barajas, A; Barillari, T; Bazan, A; Bellachia, F; Beloborodova, O; Benchekroun, D; Benslama, K; Berger, N; Berghaus, F; Bernat, P; Bernier, R; Besson, N; Binet, S; Blanchard, JB; Blondel, A; Bobrovnikov, V; Bohner, O; Boonekamp, M; Bordoni, S; Bouchel, M; Bourdarios, C; Bozzone, A; Braun, HM; Breton, D; Brettel, H; Brooijmans, G; Caputo, R; Carli, T; Carminati, L; Caughron, S; Cavalleri, P; Cavalli, D; Chareyre, E; Chase, RL; Chekulaev, SV; Chen, H; Cheplakov, A; Chiche, R; Citterio, M; Cojocaru, C; Colas, J; Collard, C; Collot, J; Consonni, M; Cooke, M; Copic, K; Costa, GC; Courneyea, L; Cuisy, D; Cwienk, WD; Damazio, D; Dannheim, D; De Cecco, S; De La Broise, X; De La Taille, C; de Vivie, JB; Debennerot, B; Delagnes, E; Delmastro, M; Derue, F; Dhaliwal, S; Di Ciaccio, L; Doan, O; Dudziak, F; Duflot, L; Dumont-Dayot, N; Dzahini, D; Elles, S; Ertel, E; Escalier, M; Etienvre, AI; Falleau, I; Fanti, M; Farooque, T; Favre, P; Fayard, Louis; Fent, J; Ferencei, J; Fischer, A; Fournier, D; Fournier, L; Fras, M; Froeschl, R; Gadfort, T; Gallin-Martel, ML; Gibson, A; Gillberg, D; Gingrich, DM; Göpfert, T; Goodson, J; Gouighri, M; Goy, C; Grassi, V; Gray, J; Guillemin, T; Guo, B; Habring, J; Handel, C; Heelan, L; Heintz, H; Helary, L; Henrot-Versille, S; Hervas, L; Hobbs, J; Hoffman, J; Hostachy, JY; Hoummada, A; Hrivnac, J; Hrynova, T; Hubaut, F; Huber, J; Iconomidou-Fayard, L; Iengo, P; Imbert, P; Ishmukhametov, R; Jantsch, A; Javadov, N; Jezequel, S; Jimenez Belenguer, M; Ju, XY; Kado, M; Kalinowski, A; Kar, D; Karev, A; Katsanos, I; Kazarinov, M; Kerschen, N; Kierstead, J; Kim, MS; Kiryunin, A; Kladiva, E; Knecht, N; Kobel, M; Koletsou, I; König, S; Krieger, P; Kukhtin, V; Kuna, M; Kurchaninov, L; Labbe, J; Lacour, D; Ladygin, E; Lafaye, R; Laforge, B; Lamarra, D; Lampl, W; Lanni, F; Laplace, S; Laskus, H; Le Coguie, A; Le Dortz, O; 
Le Maner, C; Lechowski, M; Lee, SC; Lefebvre, M; Leonhardt, K; Lethiec, L; Leveque, J; Liang, Z; Liu, C; Liu, T; Liu, Y; Loch, P; Lu, J; Ma, H; Mader, W; Majewski, S; Makovec, N; Makowiecki, D; Mandelli, L; Mangeard, PS; Mansoulie, B; Marchand, JF; Marchiori, G; Martin, D; Martin-Chassard, G; Martin dit Latour, B; Marzin, A; Maslennikov, A; Massol, N; Matricon, P; Maximov, D; Mazzanti, M; McCarthy, T; McPherson, R; Menke, S; Meyer, JP; Ming, Y; Monnier, E; Mooshofer, P; Neganov, A; Niedercorn, F; Nikolic-Audit, I; Nugent, IM; Oakham, G; Oberlack, H; Ocariz, J; Odier, J; Oram, CJ; Orlov, I; Orr, R; Parsons, JA; Peleganchuk, S; Penson, A; Perini, L; Perrodo, P; Perrot, G; Perus, A; Petit, E; Pisarev, I; Plamondon, M; Poffenberger, P; Poggioli, L; Pospelov, G; Pralavorio, P; Prast, J; Prudent, X; Przysiezniak, H; Puzo, P; Quentin, M; Radeka, V; Rajagopalan, S; Rauter, E; Reimann, O; Rescia, S; Resende, B; Richer, JP; Ridel, M; Rios, R; Roos, L; Rosenbaum, G; Rosenzweig, H; Rossetto, O; Roudil, W; Rousseau, D; Ruan, X; Rudert, A; Rusakovich, N; Rusquart, P; Rutherfoord, J; Sauvage, G; Savine, A; Schaarschmidt, J; Schacht, P; Schaffer, A; Schram, M; Schwemling, P; Seguin Moreau, N; Seifert, F; Serin, L; Seuster, R; Shalyugin, A; Shupe, M; Simion, S; Sinervo, P; Sippach, W; Skovpen, K; Sliwa, R; Soukharev, A; Spano, F; Stavina, P; Straessner, A; Strizenec, P; Stroynowski, R; Talyshev, A; Tapprogge, S; Tarrade, F; Tartarelli, GF; Teuscher, R; Tikhonov, Yu; Tocut, V; Tompkins, D; Thompson, P; Tisserant, S; Todorov, T; Tomasz, F; Trincaz-Duvoid, S; Trinh, Thi N; Trochet, S; Trocme, B; Tschann-Grimm, K; Tsionou, D; Ueno, R; Unal, G; Urbaniec, D; Usov, Y; Voss, K; Veillet, JJ; Vincter, M; Vogt, S; Weng, Z; Whalen, K; Wicek, F; Wilkens, H; Wingerter-Seez, I; Wulf, E; Yang, Z; Ye, J; Yuan, L; Yurkewicz, A; Zarzhitsky, P; Zerwas, D; Zhang, H; Zhang, L; Zhou, N; Zimmer, J; Zitoun, R; Zivkovic, L

    2010-01-01

    The ATLAS detector has been designed for operation at the Large Hadron Collider at CERN. ATLAS includes electromagnetic and hadronic liquid argon calorimeters, with almost 200,000 channels of data that must be sampled at the LHC bunch crossing frequency of 40 MHz. The calorimeter electronics calibration and readout are performed by custom electronics developed specifically for these purposes. This paper describes the system performance of the ATLAS liquid argon calibration and readout electronics, including noise, energy and time resolution, and long term stability, with data taken mainly from full-system calibration runs performed after installation of the system in the ATLAS detector hall at CERN.

  13. ATLAS DataFlow the Read-Out Subsystem, Results from Trigger and Data-Acquisition System Testbed Studies and from Modeling

    CERN Document Server

    Vermeulen, J C; Alexandrov, I; Amorim, A; Dos Anjos, A; Badescu, E; Barros, N; Beck, H P; Blair, R; Burckhart-Chromek, Doris; Caprini, M; Ciobotaru, M; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Dobinson, Robert W; Dobson, M; Drake, G; Ermoline, Y; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Gorini, B; Green, B; Gruwé, M; Haas, S; Haberichter, W N; Haeberli, C; Hasegawa, Y; Hauser, R; Hinkelbein, C; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klose, D; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Lankford, A; Lehmann, G; Le Vine, M J; Mapelli, L; Martin, B; McLaren, R; Meirosu, C; Mineev, M; Misiejuk, A; Mornacchi, G; Müller, M; Murillo, R; Nagasaka, Y; Petersen, J; Pope, B; Prigent, D; Ryabov, Yu; Schlereth, J L; Sloper, J E; Soloviev, I; Spiwoks, R; Stancu, S; Strong, J; Tremblet, L; Ünel, G; Vandelli, Wainer R; Werner, P; Wickens, F; Wiesmann, M; Wu, M; Yasu, Y; 14th IEEE - NPSS Real Time Conference 2005 Nuclear Plasma Sciences Society

    2005-01-01

    In the ATLAS experiment at the LHC, the output of readout hardware specific to each subdetector will be transmitted to buffers, located on custom made PCI cards ("ROBINs"). The data consist of fragments of events accepted by the first-level trigger at a maximum rate of 100 kHz. Groups of four ROBINs will be hosted in about 150 Read-Out Subsystem (ROS) PCs. Event data are forwarded on request via Gigabit Ethernet links and switches to the second-level trigger or to the Event builder. In this paper a discussion of the functionality and real-time properties of the ROS is combined with a presentation of measurement and modelling results for a testbed with a size of about 20% of the final DAQ system. Experimental results on strategies for optimizing the system performance, such as utilization of different network architectures and network transfer protocols, are presented for the testbed, together with extrapolations to the full system.
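
    The ROS behaviour described above, buffering event fragments keyed by level-1 event ID and forwarding or releasing them on request, can be modelled in a few lines. The class and method names below are invented for illustration, not taken from the ATLAS DAQ software:

```python
from collections import OrderedDict

class ReadOutSubsystem:
    """Toy model of a ROS buffer: store event fragments keyed by
    level-1 event ID, serve them on request, delete on release."""

    def __init__(self):
        self._buffer = OrderedDict()

    def store(self, l1_id, fragment):
        self._buffer[l1_id] = fragment

    def request(self, l1_id):
        # Forwarded to the level-2 trigger or event builder on demand.
        return self._buffer.get(l1_id)

    def release(self, l1_ids):
        # The DAQ releases fragments once an event is rejected or built.
        for l1_id in l1_ids:
            self._buffer.pop(l1_id, None)

ros = ReadOutSubsystem()
ros.store(41, b"\x01\x02")
ros.store(42, b"\x03\x04")
print(ros.request(42))   # b'\x03\x04'
ros.release([41, 42])
print(len(ros._buffer))  # 0
```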

  14. ATLAS pixel detector timing optimisation with the back of crate card of the optical pixel readout system

    Energy Technology Data Exchange (ETDEWEB)

    Flick, T; Gerlach, P; Reeves, K; Maettig, P [Department of Physics, Bergische Universitaet Wuppertal (Germany)

    2007-04-15

    As with all detector systems at the Large Hadron Collider (LHC), the assignment of data to the correct bunch crossing, where bunch crossings will be separated in time by 25 ns, is one of the challenges for the ATLAS pixel detector. This document explains how the detector system will accomplish this by describing the general strategy, its implementation, the optimisation of the parameters, and the results obtained during a combined testbeam of all ATLAS subdetectors.
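
    The bunch-crossing assignment problem mentioned above reduces to mapping a hit time onto a 25 ns grid after subtracting a tuned delay. A toy sketch with hypothetical timing numbers (the real optimisation involves per-module delay registers and testbeam scans):

```python
BUNCH_SPACING_NS = 25.0

def bunch_crossing_id(hit_time_ns, latency_offset_ns):
    """Map a hit timestamp to a bunch-crossing ID by subtracting a
    tuned per-module delay and dividing by the 25 ns bunch spacing."""
    return int((hit_time_ns - latency_offset_ns) // BUNCH_SPACING_NS)

# Hypothetical numbers: a hit at 1034 ns with a 30 ns tuned offset
print(bunch_crossing_id(1034.0, 30.0))  # 40
```

Tuning the offset so that hits from one physical crossing land in a single 25 ns bin is exactly what the parameter optimisation described above achieves.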

  15. Read-out and calibration of a tile calorimeter for ATLAS

    International Nuclear Information System (INIS)

    The read-out and calibration of the scintillating-tile hadronic calorimeter for ATLAS are discussed. Tests with prototypes of FERMI, a system of read-out electronics based on a dynamic range compressor reducing the dynamic range from 16 to 10 bits and a 40 MHz 10-bit sampling ADC, are presented. Compared with a standard charge-integrating read-out, an improvement of 1% in the constant term of the resolution is obtained

  16. FELIX: the detector readout upgrade of the ATLAS experiment

    CERN Document Server

    Ryu, Soo; The ATLAS collaboration

    2015-01-01

    From the Phase-I upgrade onward, the Front-End Link eXchange (FELIX) system will be the interface between the readout system and the detector front-end and trigger electronics at the ATLAS experiment. FELIX will function as a gateway to a commodity switched network which will use standard technologies (Ethernet or InfiniBand) to communicate with data collecting and processing components. In this talk the system architecture of FELIX will be described and test results from the FELIX demonstrator will be presented.

  17. Research and development for a free-running readout system for the ATLAS LAr Calorimeters at the high luminosity LHC

    Science.gov (United States)

    Hils, Maximilian

    2016-07-01

    The ATLAS Liquid Argon (LAr) Calorimeters were designed and built to measure electromagnetic and hadronic energy in proton-proton collisions produced at the Large Hadron Collider (LHC) at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to 10^34 cm^-2 s^-1. The High Luminosity LHC (HL-LHC) programme is now developed for up to 5-7 times the design luminosity, with the goal of accumulating an integrated luminosity of 3000 fb^-1. In the HL-LHC phase, the increased radiation levels and an improved ATLAS trigger system require a replacement of the Front-end (FE) and Back-end (BE) electronics of the LAr Calorimeters. Results from research and development of individual components and their radiation qualification as well as the overall system design will be presented.

  18. Evaluation of Fermi Read-out of the ATLAS Tilecal Prototype

    CERN Document Server

    Agnvall, S; Albiol, F; Alifanov, A; Amaral, P; Amelin, D V; Amorim, A; Anderson, K J; Angelini, C; Antola, A; Astesan, F; Astvatsaturov, A R; Autiero, D; Badaud, F; Barreira, G; Benetta, R; Berglund, S R; Blanchot, G; Blucher, E; Blaj, C; Bodö, P; Bogush, A A; Bohm, C; Boldea, V; Borisov, O N; Bosman, M; Bouhemaid, N; Brette, P; Breveglieri, L; Bromberg, C; Brossard, M; Budagov, Yu A; Calôba, L P; Carvalho, J; Casado, M P; Castera, A; Cattaneo, Paolo Walter; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chevaleyre, J C; Chirikov-Zorin, I E; Chlachidze, G; Cobal, M; Cogswell, F; Colaço, F; Constantinescu, S; Costanzo, D; Crouau, M; Dadda, L; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; Del Prete, T; De Santo, A; Di Girolamo, B; Dita, S; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Efthymiopoulos, I; Engström, M; Errede, D; Errede, S; Evans, H; Fenyuk, A; Ferrer, A; Flaminio, Vincenzo; Fristedt, A; Gallas, E J; Gaspar, M; Gildemeister, O; Givoletto, M; Glagolev, V V; Goggi, Giorgio V; Gómez, A; Gong, S; Guz, Yu; Grabskii, V; Grieco, M; Hakopian, H H; Haney, M W; Hansen, M; Hellman, S; Henriques, A; Hentzell, H; Holmberg, T; Holmgren, S O; Honoré, P F; Huston, J; Ivanyushenkov, Yu M; Jon-And, K; Juste, A; Kakurin, S; Karapetian, G V; Karyukhin, A N; Kérek, A; Khokhlov, Yu A; Kopikov, S V; Kostrikov, M E; Kostyukhin, V; Kukhtin, V V; Kulchitskii, Yu A; Kurzbauer, W; Lami, S; Landi, G; Lapin, V; Lazzeroni, C; Lebedev, A; Leitner, R; Li, J; Lippi, M; Le Dortz, O; Löfstedt, B; Lomakin, Yu F; Lomakina, O V; Lokajícek, M; Lund-Jensen, B; Maio, A; Malyukov, S N; Mariani, R; Marroquin, F; Martins, J P; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Montarou, G; Motto, S; Muanza, G S; Némécek, S; Nessi, Marzio; Ödmark, A; Onofre, A; Orteu, S; Padilla, C; Pallin, D; Pantea, D; Patriarca, J; Pereira, A; Perlas, J A; Persson, S T; Petit, P; Pilcher, J E; Pinhão, J; Poggioli, Luc; Poirot, S; Polesello, G; 
Price, L E; Proudfoot, J; Pukhov, O; Reinmuth, G; Renzoni, G; Richards, R; Riu, I; Romanov, V; Ronceux, B; Rumyantsev, V; Rusakovitch, N A; Sami, M; Sanders, H; Santos, J; Savoy-Navarro, Aurore; Sawyer, L; Says, L P; Schwemling, P; Seixas, J M; Selldén, B; Semenov, A A; Shchelchkov, A S; Shochet, M J; Simaitis, V J; Sissakian, A N; Solodkov, A A; Solovyanov, O; Sonderegger, P; Soustruznik, K; Stanek, R; Starchenko, E A; Stefanelli, R; Stephens, R; Suk, M; Sundblad, R; Svensson, C; Tang, F; Tardell, S; Tas, P; Teubert, F; Thaler, J J; Tokár, S; Topilin, N D; Trka, Z; Turcot, A S; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vinogradov, V; Vivaldi, F; Vorozhtsov, S B; Wagner, D; White, A; Wolters, H; Yamdagni, N; Yarygin, G; Yosef, C; Yuan, J; Zaitsev, A; Zdrazil, M

    1998-01-01

    Prototypes of the FERMI system have been used to read out a prototype of the ATLAS hadron calorimeter in a beam test at the CERN SPS. The FERMI read-out system, using a compressor and a 40 MHz sampling ADC, is compared to a standard charge-integrating read-out by measuring the energy resolution of the calorimeter separately with the two systems on the same events. Signal processing techniques have been designed to optimize the treatment of FERMI data. The resulting energy resolution is better than the one obtained with the standard read-out.
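
    The compressor-plus-ADC chain described in these FERMI records can be illustrated with a square-root-style compression curve that maps a 16-bit input range onto a 10-bit ADC, with the inverse applied offline. The transfer function below is only a stand-in for the principle; the real FERMI transfer curve and its calibration differ:

```python
import math

FULL_SCALE = 2**16 - 1   # 16-bit input dynamic range
ADC_MAX = 2**10 - 1      # 10-bit sampling ADC

def compress(signal):
    """Square-root-like compression: small signals keep fine
    resolution, large signals are compressed (illustrative curve)."""
    return ADC_MAX * math.sqrt(signal / FULL_SCALE)

def adc(voltage):
    """Quantize to an integer ADC code, clamped to the 10-bit range."""
    return min(ADC_MAX, max(0, round(voltage)))

def expand(code):
    """Offline inverse applied before energy reconstruction."""
    return FULL_SCALE * (code / ADC_MAX) ** 2

raw = 4096
code = adc(compress(raw))
print(code, expand(code))  # reconstructed value is close to the input
```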

  19. Development of ATLAS Liquid Argon Calorimeter Read-out Electronics for the HL-LHC

    CERN Document Server

    Newcomer, Mitchel; The ATLAS collaboration

    2015-01-01

    The high-luminosity phase of the Large Hadron Collider will provide a 5-7 times greater instantaneous and total luminosities than assumed in the original design of the ATLAS Liquid Argon Calorimeters and their read-out system. An improved trigger system with higher acceptance rate and longer latency and a better radiation tolerance require an upgrade of the read-out electronics. Concepts for the future read-out of the 183,000 calorimeter channels at 40-80 MHz and 16 bit dynamic range, and the development of radiation tolerant, low noise, low power and high-bandwidth electronic components will be presented.

  20. The readout driver (ROD) for the ATLAS liquid argon calorimeters

    Science.gov (United States)

    Efthymiopoulos, Ilias

    2001-04-01

    The Readout Driver (ROD) for the Liquid Argon calorimeter of the ATLAS detector is described. Each ROD module receives triggered data from 256 calorimeter cells via two fiber-optics 1.28 Gbit/s links with a 100 kHz event rate (25 kbit/event). Its principal function is to determine the precise energy and timing of the signal from discrete samples of the waveform, taken each period of the LHC clock (25 ns). In addition, it checks, histograms, and formats the digital data stream. A demonstrator system, consisting of a motherboard and several daughter-board processing units (PUs) was constructed and is currently used for tests in the lab. The design of this prototype board is presented here. The board offers maximum modularity and allows the development and testing of different PU designs based on today's leading integer and floating point DSPs.

  1. The readout driver (ROD) for the ATLAS liquid argon calorimeters

    CERN Document Server

    Efthymiopoulos, I

    2001-01-01

    The Readout Driver (ROD) for the Liquid Argon calorimeter of the ATLAS detector is described. Each ROD module receives triggered data from 256 calorimeter cells via two fiber-optics 1.28 Gbit/s links with a 100 kHz event rate (25 kbit/event). Its principal function is to determine the precise energy and timing of the signal from discrete samples of the waveform, taken each period of the LHC clock (25 ns). In addition, it checks, histograms, and formats the digital data stream. A demonstrator system, consisting of a motherboard and several daughter-board processing units (PUs) was constructed and is currently used for tests in the lab. The design of this prototype board is presented here. The board offers maximum modularity and allows the development and testing of different PU designs based on today's leading integer and floating point DSPs. (3 refs).
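
    The ROD's central task, determining energy from discrete 25 ns samples of the waveform, is in essence a weighted sum of pedestal-subtracted ADC samples (the optimal-filtering approach run on the DSPs). The weights and sample values below are illustrative, not real LAr coefficients:

```python
def optimal_filter_energy(samples, weights, pedestal):
    """Energy estimate as a weighted sum of pedestal-subtracted samples,
    the essence of the optimal-filtering algorithm on the ROD DSPs.
    Real coefficients are derived from the pulse shape and noise."""
    return sum(w * (s - pedestal) for w, s in zip(weights, samples))

# Five 40 MHz samples around the pulse peak (hypothetical ADC counts)
samples = [1000, 1180, 1420, 1300, 1120]
weights = [-0.10, 0.25, 0.60, 0.30, -0.05]  # illustrative, peak-weighted
pedestal = 1000
print(optimal_filter_energy(samples, weights, pedestal))  # 381.0
```

A second set of weights applied to the same samples yields the product of energy and time, which is how the ROD extracts precise timing as well.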

  2. A new read-out architecture for the ATLAS Tile Calorimeter Phase-II Upgrade

    CERN Document Server

    Valero, Alberto; The ATLAS collaboration

    2015-01-01

    TileCal is the Tile hadronic calorimeter of the ATLAS experiment at the LHC. The LHC has planned a series of upgrades culminating in the High Luminosity LHC (HL-LHC), which will increase the LHC nominal instantaneous luminosity by a factor of five to seven. TileCal will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal read-out electronics will be redesigned, introducing a new read-out strategy. The new TileCal read-out architecture is presented, including a description of the main electronics modules and some preliminary results obtained with the first demonstrator system.

  3. Medipix2 parallel readout system

    Science.gov (United States)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  4. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) a tremendous amount of work was done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  5. Phase-I Trigger Readout Electronics Upgrade of the ATLAS Liquid-Argon Calorimeters

    CERN Document Server

    Mori, Tatsuya; The ATLAS collaboration

    2015-01-01

    This document for the NEC'2015 proceedings gives an overview of the Phase-I upgrade of the ATLAS LAr Calorimeter trigger readout. The design of the custom-developed hardware for fast real-time data processing and transfer is also outlined. Performance results from the prototype boards in the demonstrator system, including first measurements of noise levels and response linearity, are shown.

  6. RT2016 Phase-I Trigger Readout Electronics Upgrade for the ATLAS Liquid-Argon Calorimeters

    CERN Document Server

    AUTHOR|(SzGeCERN)478829; The ATLAS collaboration

    2016-01-01

    For the Phase-I luminosity upgrade of the LHC, a higher granularity trigger readout of the ATLAS LAr Calorimeters is foreseen in order to enhance the trigger feature extraction and background rejection. The new readout system digitizes the detector signals, which are grouped into 34,000 so-called Super Cells, with 12-bit precision at 40 MHz. The data are transferred via optical links to a digital processing system which extracts the Super Cell energies. A demonstrator version of the complete system has now been installed and operated on the ATLAS detector. The talk will give an overview of the Phase-I upgrade of the ATLAS LAr Calorimeter readout and present the custom-developed hardware, including its role in real-time data processing and fast data transfer. This contribution will also report on the performance of the newly developed ASICs, including their radiation tolerance, and on the performance of the prototype boards in the demonstrator system based on various measurements with the 13 TeV collision data. R...
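
    The grouping of detector signals into Super Cells is, at its core, a fixed-mapping energy sum over neighbouring cells. A toy sketch with an invented mapping and invented energies (the real Super Cell geometry is far more detailed):

```python
def build_super_cells(cell_energies, grouping):
    """Group individual calorimeter-cell energies into coarser
    Super Cell sums. The grouping is illustrative, not the real map."""
    return [sum(cell_energies[i] for i in group) for group in grouping]

# Hypothetical cell energies (GeV) and a 4-cells-per-Super-Cell map
cells = [1.2, 0.4, 3.1, 0.2, 0.9, 0.8, 0.1, 0.3]
grouping = [(0, 1, 2, 3), (4, 5, 6, 7)]
print(build_super_cells(cells, grouping))  # two Super Cell sums
```

The trigger then works with these coarser sums, trading granularity for bandwidth while retaining far more shape information than the old trigger towers.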

  7. Integrator based read-out in Tile Calorimeter of the ATLAS experiment

    CERN Document Server

    Gonzalez, G; The ATLAS collaboration

    2011-01-01

    TileCal, the central hadronic calorimeter of the ATLAS experiment at the CERN Large Hadron Collider (LHC), is built of steel and scintillating tiles with redundant readout by optical fibers and uses photomultipliers as photodetectors. It provides measurements for hadrons, jets and missing transverse energy. To equalize the response of individual TileCal cells with a precision better than 1% and to monitor the response of each cell over time, a calibration and monitoring system based on a Cesium 137 radioactive source driven through the calorimeter volume by liquid flow has been implemented. This calibration system relies on a dedicated readout chain based on slow integrators that read currents from the TileCal photomultipliers, averaged over milliseconds during the calibration runs. During the LHC collisions the TileCal integrator-based readout provides monitoring of the beam conditions and of the stability of the TileCal optics, including the stability of the photomultiplier gains. The work to be presented will foc...

  8. Development of ATLAS Liquid Argon Calorimeters Readout Electronics for HL-LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00388354; The ATLAS collaboration

    2016-01-01

    The high-luminosity phase of the Large Hadron Collider (LHC) will provide 5-7 times greater instantaneous and total luminosities than assumed in the original design of the ATLAS Liquid Argon (LAr) Calorimeters and their readout system. The improved trigger system has a higher acceptance rate of 1 MHz and a longer latency of up to 60 microseconds; this, together with the need for better radiation tolerance, requires an upgrade of the readout electronics. This paper will present concepts for the future readout of the 182,468 calorimeter channels at 40 or 80 MHz with a 16 bit dynamic range. Progress in the development of low-noise, low-power and high-bandwidth electronic components will be presented. These include radiation-tolerant preamplifiers, analog-to-digital converters (ADC) up to 14 bits and low-power optical links providing transfer rates of at least 10 Gbps per fiber.
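
    One common way to cover a 16-bit dynamic range with a 14-bit ADC is to digitize the signal at two gain scales and keep the unsaturated one. Whether this matches the final LAr design is not stated in the abstract; the gain values below are assumptions chosen only to illustrate the idea:

```python
ADC_BITS = 14
ADC_MAX = 2**ADC_BITS - 1

def dual_gain_readout(signal, high_gain=4, low_gain=1):
    """Sketch of covering a wide dynamic range with a 14-bit ADC by
    digitizing two gain scales and keeping the unsaturated one
    (gain ratios here are illustrative, not the real design values)."""
    hg = min(signal * high_gain, ADC_MAX)
    if hg < ADC_MAX:           # high gain not saturated: finer resolution
        return hg, "high"
    return min(signal * low_gain, ADC_MAX), "low"

print(dual_gain_readout(1000))   # small pulse read out in high gain
print(dual_gain_readout(10000))  # large pulse falls back to low gain
```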

  9. Development of ATLAS Liquid Argon Calorimeters Readout Electronics for HL-LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00388354; The ATLAS collaboration

    2016-01-01

    The high-luminosity phase of the Large Hadron Collider will provide 5-7 times greater instantaneous and total luminosities than assumed in the original design of the ATLAS Liquid Argon Calorimeters and their readout system. An improved trigger system with a higher acceptance rate of 1 MHz and a longer latency of up to 60 microseconds, together with a better radiation tolerance, require an upgrade of the readout electronics. Concepts for the future readout of the 182,500 calorimeter channels at 40/80 MHz and 16 bit dynamic range, and the development of low-noise, low-power and high-bandwidth electronic components will be presented. These include ASIC developments towards radiation-tolerant low-noise pre-amplifiers, analog-to-digital converters up to 14 bits and low-power optical links providing transfer rates of at least 10 Gb/s per fiber.

  10. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation tolerant components. Within a year, a hybrid demonstrator including the new readout system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies alongside their integration in the demonstrator are presented in the context of high-reliability protection against hardware malfunction and radiation induced errors.

  11. The UK ROB-in,A prototype ATLAS readout buffer input module

    CERN Document Server

    Boorman, G; Cranfield, R; Crone, G J; Green, B; Strong, J

    2000-01-01

    This paper describes the specification, design, operation, performance and status of the UK ROB-in. The UK ROB-in is a prototype ATLAS Read-Out Buffer (ROB) input module intended both as a prototype component for the final system and for use in prototyping other parts of the ATLAS trigger/DAQ system. Its function is to buffer event fragments at the rate expected on a single detector Read-Out Link and output or release selected fragments on request. The module is available in PCI or PMC formats and is designed around a MACH5 CPLD and an Intel i960 microprocessor, together with appropriate SRAM and FIFO chips. It takes input via an S-LINK daughter-board connector at a continuous rate of up to 132 MB/s. Its functionality is based on the requirements described in the ROB-in User Requirements Document, itself based on requirements defined for a complete ATLAS ROB.

  12. Timing and Readout Control in the LHCb Upgraded Readout System

    CERN Document Server

    Alessio, Federico

    2016-01-01

    In 2019, the LHCb experiment at CERN will undergo a major upgrade in which its detector electronics and entire readout system will be changed to read out events at the full LHC rate of 40 MHz. In this paper, the new timing, trigger and readout control system for this upgrade is reviewed. Particular attention is given to the distribution of the clock, timing and synchronization information across the entire readout system using generic FTTH technology such as Passive Optical Networks. Moreover, the system will be responsible for generic control of the Front-End electronics, transmitting configuration data and receiving monitoring data, offloading the software control system from the heavy task of manipulating the complex protocols of thousands of Front-End electronics devices. The implementation is reviewed here, with a description of results from first implementations of the system, including use in test benches and techniques for timing distribution and latency control.

  13. A new read-out architecture for the ATLAS Tile Calorimeter Phase-II Upgrade

    CERN Document Server

    Valero, Alberto; The ATLAS collaboration

    2015-01-01

    TileCal is the Tile hadronic calorimeter of the ATLAS experiment at the LHC. The LHC has planned a series of upgrades culminating in the High Luminosity LHC (HL-LHC), which will increase the LHC nominal instantaneous luminosity by a factor of about five. TileCal will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal read-out electronics will be redesigned, introducing a new read-out strategy. The data generated in the detector will be transferred to the new Read-Out Drivers (sRODs), located off-detector, for every bunch crossing before any event selection is applied. Furthermore, the sROD will be responsible for providing preprocessed trigger information to the ATLAS first level of trigger. It will implement pipeline memories to cope with the latencies and rates specified in the new trigger schema, and overall it will represent the interface between the data acquisition, trigger and control systems and the on-detector electronics. The new TileCal read-out architecture will be presented includi...

  14. Yarr: A PCIe based readout system for semiconductor tracking systems

    International Nuclear Information System (INIS)

    The Yarr readout system is a novel DAQ concept that uses an FPGA board connected via PCIe to a computer to read out semiconductor tracking systems. The system uses the FPGA as a reconfigurable I/O interface which, in conjunction with the very high speed of the PCIe bus, allows the data stream coming from the pixel detector to be processed in software. Modern computer systems could potentially make custom signal-processing hardware in readout systems obsolete, and the Yarr readout system showcases this for FE-I4 chips, the state-of-the-art readout chips used in the ATLAS Pixel Insertable B-Layer and developed for tracking in high-multiplicity environments. The underlying concept of the Yarr readout system is to move intelligence from hardware into software without loss of performance, which is made possible by modern multi-core processors. The FPGA firmware acts as a buffer and does no further processing of the data stream, enabling rapid integration of new hardware with only minimal firmware modification.

  15. Upgraded Trigger Readout Electronics for the ATLAS LAr Calorimeters for Future LHC Running

    International Nuclear Information System (INIS)

    The ATLAS Liquid Argon (LAr) calorimeters produce almost 200K signals that are digitized and processed by the front-end and back-end electronics for every triggered event. Additionally, the front-end electronics sums analog signals to provide coarse-grained energy sums to the first-level (L1) trigger system. The current design was optimized for the nominal LHC luminosity of 10^34 cm^-2 s^-1. In order to retain the capability to trigger on low-energy electrons and photons when the LHC is upgraded to higher luminosity, an improved LAr calorimeter trigger readout has been proposed and is being constructed. The new trigger readout system makes the fine segmentation of the calorimeter available at the L1 trigger with high precision, in order to reduce the QCD jet background in electron, photon and tau triggers and to improve jet and missing-ET trigger performance. The new LAr Trigger Digitizer Board is designed to receive the higher-granularity signals, digitize them on-detector and send them via fast optical links to a new Digital Processing System. The reconstructed energies of the trigger readout channels after digital filtering are transmitted to the L1 system, allowing the extraction of improved trigger signatures. This contribution presents the motivation for the upgrade, the concept for the new trigger readout and the expected performance of the new trigger, and describes the components being developed for the new system
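    As an illustration of the digital-filtering step described in this record, energy in such systems is typically reconstructed as a weighted sum of pedestal-subtracted pulse samples (optimal-filtering style). The sketch below is not the actual ATLAS firmware; the sample values and filter coefficients are invented for illustration only.

    ```python
    def reconstruct_energy(samples, coeffs, pedestal):
        """Weighted sum of pedestal-subtracted ADC samples (optimal-filtering style)."""
        return sum(a * (s - pedestal) for a, s in zip(coeffs, samples))

    # Hypothetical 5-sample shaped pulse in ADC counts and hypothetical weights.
    samples = [1000, 1400, 1900, 1600, 1200]
    coeffs = [-0.10, 0.25, 0.60, 0.30, -0.05]
    pedestal = 1000
    energy = reconstruct_energy(samples, coeffs, pedestal)  # in ADC-count units
    ```

    In the real system the coefficients are derived per channel from the known pulse shape and noise, which is what lets a few discrete samples yield both energy and timing.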

  16. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    CERN Document Server

    Akerstedt, H; The ATLAS collaboration; Drake, Gary; Anderson, Kelby; Bohm, C; Oreglia, Mark; Tang, Fukun

    2015-01-01

    The Tile Calorimeter at ATLAS is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of ...

  17. A Scheme of Read-Out Organization for the ATLAS High-Level Triggers and DAQ based on ROB Complexes

    CERN Document Server

    Calvet, D; Huet, M; Mandjavidze, I D

    1999-01-01

    This paper describes a possible organization of the ATLAS High-Level Triggers and DAQ read-out system downstream of the Read-Out Drivers. It is based on the ROB Complex concept, which assumes that each read-out unit is formed by several input buffer modules sharing a network interface to a common Trigger/DAQ data collection network. An implementation of such a ROB Complex, based on the PCI bus to connect read-out buffers, a control processor and a network interface card, is presented. The total number of ROB Complexes required for ATLAS, as well as the number of CompactPCI crates housing them, are estimated. The results obtained from measurements on a ROB Complex prototype integrated in the ATLAS Level 2 Trigger ATM Testbed are given. The feasibility of some data preprocessing within a ROB Complex is shown.

  18. Electronic Readout of the Atlas Liquid Argon Calorimeter: Calibration and Performance

    CERN Document Server

    Majewski, S; The ATLAS collaboration

    2010-01-01

    The Liquid Argon (LAr) calorimeter is a key detector component in the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The LHC is a proton-proton collider with a center-of-mass energy of 14 TeV. The machine was operated at energies of 900 GeV and 2.36 TeV in 2009 and is expected to reach an energy of 7 TeV in 2010. The LAr calorimeter is designed to provide precision measurements of electrons, photons, jets and missing transverse energy. It consists of a set of sampling calorimeters with liquid argon as the active medium, housed in three separate cryostats. The LAr calorimeters are read out via a system of custom electronics. The electronic readout of the ATLAS LAr calorimeters is divided into a Front End (FE) system of boards mounted in custom crates directly on the cryostat feedthroughs, and a Back End (BE) system of VME-based boards located in an off-detector underground counting room where there is no radiation. The FE system includes Front End boards (FEBs), which perform the readout and dig...

  19. Phase-I Trigger Readout Electronics Upgrade of the ATLAS Liquid-Argon Calorimeters

    CERN Document Server

    Mori, Tatsuya; The ATLAS collaboration

    2015-01-01

    The Large Hadron Collider (LHC) is foreseen to be upgraded during the shutdown period of 2018-2019 to deliver about 3 times the instantaneous design luminosity. Since the ATLAS trigger system at that time will not support such an increase of the trigger rate, an improvement of the trigger system is required. The ATLAS LAr Calorimeter readout will therefore be modified, and digital trigger signals with a higher spatial granularity will be provided to the trigger. The new trigger signals will be arranged in 34000 Super Cells, which achieve a 5-10 times better granularity than the trigger towers currently used and allow an improved background rejection. The Super Cell readout is based on custom-developed 12-bit combined SAR ADCs in 130 nm CMOS technology, which will be installed on-detector in a radiation environment and which digitize the detector pulses at 40 MHz. The data will be transmitted to the back end using a custom serializer and optical converter over 5.44 Gb/s optical links. These components are install...
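    The figures quoted in this record admit a quick bandwidth check: a Super Cell digitized at 12 bits and 40 MHz produces 480 Mb/s of raw payload, so a 5.44 Gb/s link can carry on the order of 11 such channels. This is a back-of-envelope sketch that ignores protocol framing and line-coding overhead, so the actual channel mapping in the installed system may differ.

    ```python
    # Back-of-envelope link budget for the numbers quoted in the abstract.
    bits_per_sample = 12        # 12-bit SAR ADC
    sampling_rate = 40e6        # 40 MHz, one sample per bunch crossing
    link_rate = 5.44e9          # 5.44 Gb/s optical link

    payload_per_cell = bits_per_sample * sampling_rate   # raw bits/s per Super Cell
    cells_per_link = int(link_rate // payload_per_cell)  # upper bound, no overhead
    ```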

  20. Design, construction, quality checks and test results of first resistive-Micromegas read-out boards for the ATLAS experiment

    CERN Document Server

    Iengo, Paolo; The ATLAS collaboration

    2015-01-01

    The development work carried out at CERN to push the Micromegas technology to a new frontier is now coming to an end. The construction of the first read-out boards for the upgrade of the ATLAS muon system will demonstrate in full scale the feasibility of this ambitious project. The read-out boards, representing the heart of the detector, are manufactured in industry, making the Micromegas for ATLAS the first MPGD for a large experiment with a substantial part industrially produced. The boards are 50 cm wide and up to 220 cm long, carrying copper strips 315 μm wide with 415 μm pitch. Interconnected resistive strips, with the same pattern as the copper strips, provide spark protection. The boards are completed by the creation of cylindrical pillars 128 μm high and 280 μm in diameter, arranged in a triangular array with 7 mm spacing. The total number of boards to be produced for ATLAS is 2048, of 32 different types. We will review the main design parameters of the read-out boards for the ATLAS Micromegas, following...

  1. Electronics Development for the ATLAS Liquid Argon Calorimeter Trigger and Readout for Future LHC Running

    CERN Document Server

    Hopkins, Walter; The ATLAS collaboration

    2016-01-01

    The upgrade of the LHC will provide 7 times greater instantaneous and total luminosities than assumed in the original design of the ATLAS Liquid Argon (LAr) Calorimeters. Radiation tolerance criteria and an improved trigger system with higher acceptance rate and longer latency require an upgrade of the LAr readout electronics. In the first upgrade phase in 2019-2020, a trigger readout with up to 10 times higher granularity will be implemented. This allows an improved reconstruction of electromagnetic and hadronic showers and will reduce the background for electron, photon and energy-flow signals at the first trigger level. The analog and digital signal processing components are currently in their final design stages and a fully functional demonstrator system is operated and tested on the LAr Calorimeters. In a second upgrade stage in 2024-2026, the readout of all 183,000 LAr Calorimeter cells will be performed without trigger selection at 40 MHz sampling rate and 16 bit dynamic range. Calibrated energies of a...

  3. Electronics Development for the ATLAS Liquid Argon Calorimeter - Trigger and Readout for Future LHC Running -

    CERN Document Server

    Starz, Steffen; The ATLAS collaboration

    2016-01-01

    The upgrade of the LHC will provide up to 7.5 times greater instantaneous and total luminosities than assumed in the original design of the ATLAS Liquid Argon (LAr) Calorimeters. Radiation tolerance criteria and an improved trigger system with higher acceptance rate and longer latency require an upgrade of the LAr readout electronics. In the first upgrade phase in 2019-2020, a trigger-readout with up to 10 times higher granularity will be implemented. This allows an improved reconstruction of electromagnetic and hadronic showers and will reduce the background for electron, photon and energy-flow signals at the first trigger level. The analog and digital signal processing components are currently in their final design stages and a fully functional demonstrator system is operated and tested on the LAr Calorimeters. In a second upgrade stage in 2024-2026, the readout of all 183,000 LAr Calorimeter cells will be performed without trigger selection at 40 MHz sampling rate and 16 bit dynamic range. Calibrated energ...

  4. Readout Electronics for the ATLAS LAr Calorimeter at HL-LHC

    CERN Document Server

    Chen, H; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is one of the two general-purpose detectors designed to study proton-proton collisions (14 TeV in the center of mass) produced at the Large Hadron Collider (LHC) and to explore the full physics potential of the LHC machine at CERN. The ATLAS Liquid Argon (LAr) calorimeters are high-precision, high-sensitivity and high-granularity detectors designed to provide precision measurements of electrons, photons, jets and missing transverse energy. ATLAS (and its LAr Calorimeters) has been operating and collecting p-p collisions at the LHC since 2009. The on-detector (front-end) part of the current readout electronics measures the ionization current signals by means of preamplifiers, shapers and digitizers and then transfers the data to the off-detector electronics (back-end) for further processing via optical links. Only the data selected by the level-1 calorimeter trigger system are transferred, achieving a bandwidth reduction to 1.6 Gbps. The analog trigger sum sig...

  5. The Read-Out Driver (ROD) card for the ATLAS experiment: commissioning for the IBL detector and upgrade studies for the Pixel Layers 1 and 2

    CERN Document Server

    Travaglini, R; The ATLAS collaboration; Bindi, M; Falchieri, D; Gabrielli, A; Lama, L; Chen, S P; Hsu, S C; Hauck, S; Kugel, A; Flick, T; Wensing, M

    2013-01-01

    The upgrade of the ATLAS experiment at the LHC foresees the insertion of an innermost silicon layer, called the Insertable B-layer (IBL). The IBL read-out system will be equipped with new electronics. The Readout-Driver card (ROD) is a VME board devoted to data processing, configuration and control. A pre-production batch has been delivered in order to perform tests with instrumented slices of the overall acquisition chain, aiming to finalize strategies for system commissioning. In this contribution both setups and results will be described, as well as preliminary studies on the changes needed to adapt the ROD for the ATLAS Pixel Layers 1 and 2.

  6. The Layer 1 / Layer 2 readout upgrade for the ATLAS Pixel Detector

    CERN Document Server

    Mullier, Geoffrey; The ATLAS collaboration

    2016-01-01

    The Pixel Detector of the ATLAS experiment showed excellent performance during the whole Run 1 of the Large Hadron Collider (LHC). The increase of instantaneous luminosity foreseen during LHC Run 2 will lead to an increased detector occupancy that is expected to saturate the readout links of the outermost layers of the pixel detector, Layers 1 and 2. To ensure smooth data taking under such conditions, the read-out system of the recently installed fourth innermost pixel layer, the Insertable B-Layer, was modified to accommodate the needs of the older detector. The Layer 2 upgrade installation took place during the 2015 winter shutdown, with the Layer 1 installation scheduled for 2016. A report of the successful installation will be presented, together with the design of novel dedicated optical-to-electrical converters and the software and firmware updates.

  7. Quality control and quality assurance of micromegas readout boards for the ATLAS New Small Wheel

    CERN Document Server

    Nanda, Amit

    2016-01-01

    The resistive anode boards of the Micromegas detectors for the ATLAS NSW upgrade will be produced by industry. The anode boards will be thoroughly evaluated at CERN following a detailed quality assurance and quality control (QA/QC) procedure. This report describes the procedures and the design of a small QC tool that simplifies the measurement of the electrical properties of the readout boards.

  8. Upgraded Trigger Readout Electronics for the ATLAS LAr Calorimeters for Future LHC Running

    CERN Document Server

    Ma, H; The ATLAS collaboration

    2015-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce almost 200K signals that are digitized and processed by the front-end and back-end electronics for every triggered event. Additionally, the front-end electronics sums analog signals to provide coarse-grained energy sums to the first-level (L1) trigger system. The current design was optimized for the nominal LHC luminosity of 10^34 cm^-2 s^-1. In order to retain the capability to trigger on low-energy electrons and photons when the LHC is upgraded to higher luminosity, an improved LAr calorimeter trigger readout has been proposed and is being constructed. The new trigger readout system makes the fine segmentation of the calorimeter available at the L1 trigger with high precision, in order to reduce the QCD jet background in electron, photon and tau triggers and to improve jet and missing-ET trigger performance. The new LAr Trigger Digitizer Board is designed to receive the higher-granularity signals, digitize them on-detector and send them via fast optical links to a...

  9. The final design of the ATLAS Trigger/DAQ Readout-Buffer Input (ROBIN) Device

    CERN Document Server

    Kugel, A; Müller, M; Yu, M; Krause, E; Gorini, B; Joos, M; Petersen, J; Stancu, S; Green, B; Misiejuk, A; Kieft, G; Van Wasen, J; 10th Workshop on Electronics for LHC and Future Experiments

    2004-01-01

    The ATLAS readout subsystem (ROS) is the main interface between 1600 detector front-end readout links (ROLs) and the high-level trigger (HLT) farms. Its core device, the readout-buffer input (ROBIN), accepts event data on 3 readout links (ROLs) with a maximum rate of 100 kHz and a bandwidth of up to 160 MB/s per link. Incoming event data is temporarily buffered and delivered via PCI or Gigabit Ethernet on request. Two devices, a Xilinx XC2V2000 FPGA and an IBM PowerPC 440, implement the ROBIN's functionality. Furthermore, one 64 MB SDRAM event data buffer is available per ROL. The device supports the ATLAS baseline implementation, which foresees the PCI bus as the main communication path inside the ROS, as well as an optional data path using Gigabit Ethernet to increase scalability when needed. The paper presents the final design of the ATLAS ROBIN. Measurement results, obtained with a prototype device in PCI bus and Gigabit Ethernet setups, demonstrate the usability of the device and validate the design choices.
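    The figures in this record imply a simple per-fragment budget: at a 100 kHz event rate and up to 160 MB/s per link, the mean event fragment on a ROL cannot exceed 1.6 kB, and the 64 MB buffer then holds on the order of 40,000 fragments. A rough sketch, treating MB as 10^6 bytes and ignoring headers and buffer-management overhead:

    ```python
    link_bandwidth = 160e6     # bytes/s per readout link (ROL)
    event_rate = 100e3         # events/s (level-1 accept rate)
    buffer_size = 64e6         # bytes of SDRAM per ROL

    max_fragment = link_bandwidth / event_rate       # mean bytes per fragment
    fragments_buffered = buffer_size / max_fragment  # fragments the buffer can hold
    ```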

  10. Data readout system for multiwire proportional chambers

    International Nuclear Information System (INIS)

    An electronic system for data readout from multiwire proportional chambers (MWPCs) is described. The system will be used in a magnetic spectrometer for the investigation of rare processes at the ITEP accelerator complex. The schematic solutions used in the system make full use of the fast operation of the 'VECTOR' standard interface. The system is built entirely from domestic parts. The structure of the readout system makes it easy to reduce or increase the number of MWPC channels interrogated. The mean time required to prepare the data for one cluster does not exceed 1.5 μs
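    The quoted 1.5 μs per-cluster preparation time bounds the sustainable interrogation rate. A back-of-envelope figure, ignoring any other dead time in the chain:

    ```python
    t_cluster = 1.5e-6          # seconds to prepare the data for one cluster
    max_rate = 1.0 / t_cluster  # upper bound on clusters processed per second
    ```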

  11. Radiation Tolerant Electronics and Digital Processing for the Phase-1 Readout Upgrade of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Milic, Adriana; The ATLAS collaboration

    2015-01-01

    The high luminosities of $L > 10^{34} cm^{-2} s^{-1}$ at the Large Hadron Collider (LHC) at CERN produce an intense radiation environment that the detectors and their electronics must withstand. The ATLAS detector is a multi-purpose apparatus constructed to explore the new particle physics regime opened by the LHC. Of the many decay particles observed by the ATLAS detector, the energy of the created electrons and photons is measured by a sampling calorimeter technique that uses Liquid Argon (LAr) as its active medium. The front end (FE) electronic readout of the ATLAS LAr calorimeter located on the detector itself consists of a combined analog and digital processing system. In order to exploit the higher luminosity while keeping the same trigger bandwidth of 100 kHz, higher transverse granularity, higher resolution and longitudinal shower shape information will be provided from the LAr calorimeter to the Level-1 trigger processors. New trigger readout electronics have been designed for this purpose, which wil...

  12. Data readout system utilizing photonic integrated circuit

    International Nuclear Information System (INIS)

    We describe a novel optical solution for data readout systems. The core of the system is an Indium-Phosphide photonic integrated circuit performing as a front-end readout unit. It functions as an optical serializer in which the serialization of the input signal is provided by means of on-chip optical delay lines. The circuit employs electro-optic phase shifters to build amplitude modulators, power splitters for signal distribution, semiconductor optical amplifiers for signal amplification as well as on-chip reflectors. We present the concept of the system, the design and first characterization results of the devices that were fabricated in a multi-project wafer run

  13. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Chomont, Arthur Rene; The ATLAS collaboration

    2016-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser and charge-injection elements, and it allows the calorimeter response to be monitored and equalized at each stage of the signal production, from scin...

  14. Gravity Probe B gyroscope readout system

    Science.gov (United States)

    Muhlfelder, B.; Lockhart, J.; Aljabreen, H.; Clarke, B.; Gutt, G.; Luo, M.

    2015-11-01

    We describe the Gravity Probe B London-moment readout system successfully used on-orbit to measure two gyroscope spin axis drift rates predicted by general relativity. The system couples the magnetic signal of a spinning niobium-coated rotor into a low noise superconducting quantum interference device. We describe the multi-layered magnetic shield needed to attenuate external fields that would otherwise degrade readout performance. We discuss the ∼35 nrad/yr drift rate sensitivity that was achieved on-orbit.

  15. Irradiation tests of readout chain components of the ATLAS liquid argon calorimeters

    CERN Document Server

    Leroy, C; Golikov, V; Golubyh, S M; Kukhtin, V; Kulagin, E; Luschikov, V; Minashkin, V F; Shalyugin, A N

    1999-01-01

    Various readout chain components of the ATLAS liquid argon calorimeters have been exposed to high neutron fluences and γ-doses at the irradiation test facility of the IBR-2 reactor of JINR, Dubna. Results of the capacitance and impedance measurements of coaxial cables are presented. Results of peeling tests of PC board samples (kapton and copper strips), as a measure of the irradiation hardness of the bonding agent, are also reported.

  16. Firmware development and testing of the ATLAS IBL Readout Driver card

    CERN Document Server

    Chen, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, the Pixel detector is inserting an additional inner layer called the Insertable B-Layer (IBL). The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential frontend data path of the IBL’s off-detector DAQ system. The strategy for IBL ROD firmware development focused on migrating and tailoring HDL code blocks from the Pixel ROD to ensure modular compatibility in future ROD upgrades, in which a unified code version will interface with the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation, tested in the testbench and on ROD prototypes, will be report...

  17. Firmware development and testing of the ATLAS IBL Read-Out Driver card

    CERN Document Server

    Chen, S-P; The ATLAS collaboration; Falchieri, D; Gabrielli, A; Hauck, S; Hsu, S-C; Kretz, M; Kugel, A; Travaglini, R; Wensing, M

    2014-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, the Pixel detector is inserting an additional inner layer called Insertable B-Layer (IBL). The Read-Out Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential frontend data path of the IBL’s off-detector DAQ system. The strategy for IBL ROD firmware development focused on migrating and tailoring HDL code blocks from Pixel ROD to ensure modular compatibility in future ROD upgrades, in which a unified code version will interface with IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation, tested in testbench and on ROD prototypes, will be ...

  18. System Design of the ATLAS Absolute Luminosity Monitor

    CERN Document Server

    Anghinolfi, Francis; Franz, Sebastien; Iwanski, W; Lundberg, B; PH-EP

    2007-01-01

    The ATLAS absolute luminosity monitor is composed of 8 Roman pots symmetrically located in the LHC tunnel. Each pot contains 23 multi-anode photomultiplier tubes, and each of those is fitted with a front-end assembly called a PMF. A PMF provides the high-voltage biasing of the tube, the front-end readout chip and the readout logic in a very compact arrangement. The 25 PMFs contained in one Roman pot are connected to a motherboard used as an interface to the back-end electronics. The system allows the front-end electronics to be configured from the ATLAS detector control system and the luminosity data to be transmitted over S-Link.

  19. Muon Identification with the ATLAS Tile Calorimeter Read-Out Driver for Level-2 Trigger Purposes

    CERN Document Server

    Ruiz-Martinez, A

    2008-01-01

    The Hadronic Tile Calorimeter (TileCal) at the ATLAS experiment is a detector made of iron as passive medium and plastic scintillating tiles as active medium. The light produced by the particles is converted to electrical signals, which are digitized in the front-end electronics and sent to the back-end system. The main elements of the back-end electronics are the VME 9U Read-Out Driver (ROD) boards, responsible for data management, processing and transmission. A total of 32 ROD boards, placed in the data acquisition chain between the Level-1 and Level-2 triggers, are needed to read out the whole calorimeter. They are equipped with fixed-point Digital Signal Processors (DSPs) that apply online algorithms to the incoming raw data. Although the main purpose of TileCal is to measure the energy and direction of hadronic jets, by taking advantage of its projective segmentation, soft muons not triggered at Level-1 (with pT<5 GeV) can be recovered. A TileCal standalone muon identification algorithm is presented and i...

  20. A readout system for passive pressure sensors

    International Nuclear Information System (INIS)

    This paper presents a readout system for passive pressure sensors which consist of a pressure-sensitive capacitor and an inductance coil forming an LC circuit. The LC circuit transforms the pressure variation into a shift of the LC resonant frequency. The proposed system is composed of a reader antenna inductively coupled to the sensor inductor, a measurement circuit, and a PC post-processing unit. The measurement circuit generates a DC output voltage related to the sensor's resonant frequency and converts the output voltage into digital form. The PC post-processing unit processes the digital data and calculates the sensor's resonant frequency. To test the performance of the readout system, a sensor was designed and fabricated based on low-temperature co-fired ceramic (LTCC), and a series of test experiments was carried out. The experimental results show good agreement with the impedance analyzer's results, with an error of less than 2.5%, and the measured values are almost insensitive to variation of the readout distance. This demonstrates that the proposed system is effective in practice.
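    The readout principle described here rests on the standard LC resonance relation f = 1/(2π√(LC)): the reader measures the resonant frequency and inverts this relation to recover the pressure-dependent capacitance. A minimal sketch with hypothetical component values (not those of the LTCC sensor in the paper):

    ```python
    import math

    def resonant_frequency(L, C):
        """Resonant frequency of an ideal LC circuit, f = 1/(2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

    def capacitance_from_frequency(L, f):
        """Invert the resonance relation to recover C from a measured f."""
        return 1.0 / (L * (2.0 * math.pi * f) ** 2)

    L_coil = 2.2e-6    # sensor coil inductance (hypothetical)
    C_sense = 10e-12   # pressure-dependent capacitance at some pressure (hypothetical)
    f = resonant_frequency(L_coil, C_sense)              # ~34 MHz for these values
    C_recovered = capacitance_from_frequency(L_coil, f)  # round-trips back to C_sense
    ```

    A calibration curve C(p) for the specific sensor then maps the recovered capacitance to pressure.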

  1. "The Read-Out Driver" ROD card for the Insertable B-layer (IBL) detector of the ATLAS experiment: commissioning and upgrade studies for the Pixel Layers 1 and 2

    International Nuclear Information System (INIS)

    The upgrade of the ATLAS experiment at the LHC foresees the insertion of an innermost silicon layer, called the Insertable B-layer (IBL). The IBL read-out system will be equipped with new electronics. The Readout-Driver card (ROD) is a VME board devoted to data processing, configuration and control. A pre-production batch has been delivered for testing with instrumented slices of the overall acquisition chain, aiming to finalize strategies for system commissioning. In this paper system setups and results will be described, as well as preliminary studies on the changes needed to adapt the ROD for the ATLAS Pixel Layers 1 and 2

  2. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely; the volume of processed data is beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring optimised for presenting huge amounts of information. We present an overview of the major ATLAS Production System components: job and task definition, the workflow manager web user i...

  3. DTMROC-S: Deep submicron version of the readout chip for the TRT detector in ATLAS

    OpenAIRE

    Anghinolfi, Francisco; Åkesson, Torsten Paul Åke; Eerola, Paula; Farthouat, Philippe; Lichard, Peter; Ryjov, Vladimir; Szczygiel, Richard; Dressnandt, Nandor; Keener, Paul; Newcomer, Mitch; Van Berg, Rick; Williams, Hugh

    2002-01-01

    A new version of the circuit for the readout of the ATLAS straw tube detector, TRT [1], has been developed in a deep-submicron process. The DTMROC-S is fabricated in a commercial 0.25 μm CMOS IBM technology, with a library hardened by layout techniques [2]. Compared to the previous version of the chip [3], done in a 0.8 μm radiation-hard CMOS, and despite the features added to improve the robustness and testability of the circuit, the deep-submicron technology results in a much smaller chip...

  4. Radiation Tolerant Electronics and Digital Processing for the Phase-I Trigger Readout Upgrade of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Milic, Adriana; The ATLAS collaboration

    2015-01-01

    The high luminosities of $\mathcal{L} > 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ at the Large Hadron Collider (LHC) at CERN produce an intense radiation environment that the detectors and their electronics must withstand. The ATLAS detector is a multi-purpose apparatus constructed to explore the new particle physics regime opened by the LHC. Of the many decay particles observed by the ATLAS detector, the energy of the created electrons and photons is measured by a sampling calorimeter technique that uses Liquid Argon (LAr) as its active medium. The Front End (FE) electronic readout of the ATLAS LAr calorimeter located on the detector itself consists of a combined analog and digital processing system. The FE electronics were qualified for radiation levels corresponding to 10 years of LHC operations. The high luminosity running of the LHC (HL-LHC), with instantaneous luminosities of $5 \times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$ and an integrated luminosity of $3000\,\mathrm{fb}^{-1}$, will exceed these d...

  5. High-rate irradiation of 15 mm muon drift tubes and development of an ATLAS compatible readout driver for micromegas detectors

    International Nuclear Information System (INIS)

    around 72% for a single tube layer at 10 kHz/cm2 irradiation rate. A second proposal for a New Small Wheel detector technology is Micromegas detectors. These highly segmented planar gaseous detectors are capable of very high rate particle tracking with single-plane angular resolution or track reconstruction. In 2013 the ATLAS community decided in favor of this technology for precision tracking in the New Small Wheels. A prototype Micromegas detector will be installed in summer 2014 on the present ATLAS Small Wheel to serve as a test case for the technology and as a template for the necessary changes to the ATLAS hardware and software infrastructure. To fully profit from this installation, an ATLAS compatible Read Out Driver (ROD) had to be developed that allows the prototype chamber to be completely integrated into the ATLAS data acquisition chain. This device contains state-of-the-art FPGAs and is based on the Scalable Readout System (SRS) of the RD51 collaboration. The system design, its necessary functionalities and its interfaces to other systems are presented, as is the use of APV25 front-end chips. Several initial issues with the system were solved during development. The new ROD was integrated into the ATLAS Monitored Drift Tube readout and into a VME based readout system of the LMU Cosmic Ray Facility. Successful operation has meanwhile been proven in several additional test cases within the ATLAS infrastructure. The whole data acquisition chain is ready for productive use in the ATLAS environment.

  6. The readout system for the LHCb Outer Tracker

    CERN Document Server

    Wiedner, D; Apeldorn, G; Bachmann, S; Bagaturia, Yu S; Bauer, T; Berkien, A; Blouw, J; Bos, E; Deisenroth, M; Dubitzki, R; Eisele, F; Guz, Yu; Haas, T; Hommels, B; Ketel, T; Knopf, J; Merk, M; Nardulli, J; Nedos, M; Pellegrino, A; Rausch, A; Rusnyak, R; Schwemmer, R; Simoni, E; Sluijk, T; Spaan, B; Spelt, J; Stange, U; Van Tilburg, J; Trunk, U; Tuning, N; Uwer, U; Vankow, P; Warda, K

    2007-01-01

    The LHCb Outer Tracker is composed of 55 000 straw drift tubes. The requirements for the OT electronics are precise (1 ns) drift-time measurement at 6% occupancy and 1 MHz readout. Charge signals from the straw detector are amplified, shaped and discriminated by ATLAS ASDBLR chips. Drift times are determined and stored in the OTIS TDC and sent to a GOL serializer on L0 accept. Optical fibres carry the data 90 m to the TELL1 acquisition board. The full readout chain performed well in an electron test beam.
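The drift-time measurement described above reduces, on the readout side, to converting a TDC fine-time count into a time relative to the bunch crossing. A minimal sketch of that conversion, with an illustrative bin width and a hypothetical per-channel offset `T0_NS` (neither taken from the OTIS specification):

```python
# Sketch: convert a TDC count to a drift time in ns.
# Bin width and t0 offset are illustrative assumptions, not OTIS specs.
TDC_BIN_NS = 25.0 / 64   # hypothetical fine-time bin: 25 ns clock / 64 bins
T0_NS = 5.0              # hypothetical per-channel time offset

def drift_time_ns(tdc_count: int) -> float:
    """Drift time in ns relative to the bunch crossing."""
    return tdc_count * TDC_BIN_NS - T0_NS

print(drift_time_ns(128))  # 128 bins * 25/64 ns = 50 ns, minus 5 ns offset
```

In a real system the offset would be calibrated per channel; the 1 ns precision requirement constrains the fine-time bin width.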

  7. A Readout System for the LHCb Outer Tracker

    CERN Document Server

    Wiedner, D; Apeldorn, G; Bachmann, S; Bagaturi, I; Bauer, T; Berkien, A; Blouw, J; Bos, E; Deisenroth, M; Dubitzki, R; Eisele, F; Guz, Y; Haas, T; Hommels, B; Ketel, T; Knopf, J; Merk, M; Nardulli, J; Nedos, M; Pellegrino, A; Rausch, A; Rusnyak, R; Schwemmer, R; Simoni, E; Sluijk, T; Spaan, B; Spelt, J; Stange, U; van Tilburg, J; Trunk, U; Tuning, N; Uwer, U; Vankow, P; Warda, K

    2006-01-01

    The LHCb Outer Tracker is composed of 55 000 straw drift tubes. The requirements for the OT electronics are precise (1 ns) drift-time measurement at 6% occupancy and 1 MHz readout. Charge signals from the straw detector are amplified, shaped and discriminated by ATLAS ASDBLR chips. Drift times are determined and stored in the OTIS TDC and sent to a GOL serializer on L0 accept. Optical fibres carry the data 90 m to the TELL1 acquisition board. The full readout chain performed well in an electron test beam.

  8. Monitoring the CMS strip tracker readout system

    International Nuclear Information System (INIS)

    The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, to archive data for later analysis, and possibly to trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system and describe its software components, which are already in place, the various monitoring streams available, and our experience of operating and monitoring a large-scale system.

  9. Integrated multi-crate FERA readout system

    International Nuclear Information System (INIS)

    We discuss a moderate-size readout system based entirely on FERA compatible units. The implementation of a specially developed FERA Extender module is presented, whose main feature is the ability to distribute the system over many CAMAC crates. This provides a convenient way of splitting the FERA bus into several virtually independent sub-systems driven by individual gate signals. Tagging of the event fragments from each sub-system with an event number incremented on the arrival of each master gate, provides a convenient means of reconstructing the full event at a later stage. An example of the external supplementary FERA control logic required for a complex multi-crate and multi-gate system controlled by a single FERA Manager, is also discussed together with some remarks on the system performance
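The event-tagging scheme above — each sub-system stamps its fragments with an event number incremented on each master gate — makes offline event building a simple grouping by that number. A minimal sketch (tuple layout and sub-system names are illustrative, not from the FERA specification):

```python
# Sketch: rebuild full events from sub-system fragments tagged with an
# event number, as in the multi-crate FERA readout described above.
from collections import defaultdict

def build_events(fragments):
    """fragments: iterable of (event_number, subsystem, data) tuples."""
    events = defaultdict(dict)
    for evno, subsystem, data in fragments:
        events[evno][subsystem] = data
    return dict(events)

frags = [(1, "crateA", [10, 11]), (1, "crateB", [20]),
         (2, "crateA", [12]), (2, "crateB", [21, 22])]
events = build_events(frags)
print(events[1])  # {'crateA': [10, 11], 'crateB': [20]}
```

The same grouping also exposes incomplete events: any event number missing a sub-system entry indicates a lost fragment.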

  10. LHCb: A new Readout Control system for the LHCb Upgrade

    CERN Multimedia

    Alessio, F

    2012-01-01

    The LHCb experiment has proposed an upgrade towards a full 40 MHz readout system in order to run between five and ten times its initial design luminosity. The entire readout architecture will be upgraded in order to cope with higher sub-detector occupancies, higher rate and higher network load. In this paper, we describe the architecture, functionalities and the first hardware implementation of a new Readout Control system for the LHCb upgrade. The system is based on FPGAs and bi-directional links for the control of the entire readout architecture. First results on the validation of the system are also given.

  11. Spectroscopic measurements with the ATLAS FE-I4 pixel readout chip

    Energy Technology Data Exchange (ETDEWEB)

    Pohl, David-Leon; Janssen, Jens; Hemperek, Tomasz; Huegging, Fabian; Wermes, Norbert [Physikalisches Institut der Universitaet Bonn (Germany)

    2015-07-01

    The ATLAS FE-I4 pixel readout chip is a large (2 × 2 cm²) state-of-the-art ASIC used in high energy physics experiments as well as for research and development purposes. While the FE-I4 is optimized for high hit rates, it provides very limited charge resolution. Therefore two methods were developed to obtain high-resolution single-pixel charge spectra with the ATLAS FE-I4. The first method relies on the ability to change the detection threshold in small steps while counting hits from a particle source, and has a resolution limited only by electronic noise. The other method uses an FPGA-based time-to-digital converter to digitize the analog charge signal with high precision. The feasibility, performance and challenges of these methods are discussed. First results of sensor characterizations with radioactive sources and test beams with the ATLAS FE-I4, in view of the charge collection efficiency after irradiation, are presented.
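The threshold-scan method works because counting hits above a threshold yields an integral (complementary cumulative) spectrum; differencing neighbouring thresholds recovers the differential charge spectrum. A toy sketch with made-up numbers:

```python
# Sketch: recover a differential charge spectrum from a threshold scan.
# Counts at threshold t = number of hits with charge above t (an integral
# spectrum); the difference between neighbouring thresholds approximates
# the differential spectrum. All values are illustrative.
thresholds = [0, 1, 2, 3, 4, 5]          # arbitrary charge units
counts_above = [100, 95, 60, 20, 5, 0]   # hits counted above each threshold

spectrum = [counts_above[i] - counts_above[i + 1]
            for i in range(len(counts_above) - 1)]
print(spectrum)  # hits per threshold bin: [5, 35, 40, 15, 5]
```

The bins sum back to the total hit count, and the resolution of the recovered spectrum is set by the threshold step size and electronic noise, not by the chip's coarse charge digitization.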

  12. Development of Digital Signal Processing with FPGAs for the Readout of the ATLAS Liquid Argon Calorimeter at HL-LHC

    CERN Document Server

    Stärz, Steffen; Zuber, K

    2010-01-01

    The Liquid Argon calorimeter of the ATLAS detector at CERN in Geneva is to be equipped with advanced readout electronics for operation at the High Luminosity LHC. In this diploma thesis the aspects of fast serial data transmission and data processing, used for the communication between different readout modules and the data storage buffers of the trigger, are further developed. The main focus is on a first preparation of the detector raw data with regard to a signal correction using a FIR filter. The aim is a maximally efficient, resource-economising and low-latency solution that allows the huge amount of incoming detector raw data to be processed in real time. For this purpose a prototype of a 5-stage FIR filter, reconfigurable via UDP/IP and equipped with a Gigabit Ethernet interface, was implemented in a Xilinx Virtex-5 FPGA. The performance reached is fully within the requirements for the upgraded calorimeter readout of ATLAS.
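A 5-stage FIR filter of the kind described can be modelled in software before committing it to the FPGA. The sketch below shows a direct-form FIR with five illustrative coefficients (the actual filter coefficients would come from the signal-correction design, not from here):

```python
# Sketch: a 5-tap direct-form FIR filter, y[n] = sum_k c[k] * x[n-k],
# as a software model of the FPGA filter described above.
# Coefficients are illustrative, not the thesis's actual filter.
COEFFS = [0.1, 0.2, 0.4, 0.2, 0.1]

def fir(samples, coeffs=COEFFS):
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:          # zero-padded history before first sample
                acc += c * samples[n - k]
        out.append(acc)
    return out

print(fir([0, 0, 10, 0, 0]))  # impulse response scaled by 10
```

In hardware the same structure maps to a shift register of samples feeding multiply-accumulate stages, one per tap, which is why latency grows with tap count.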

  13. The Omega Ring Imaging Cerenkov Detector readout system user's guide

    International Nuclear Information System (INIS)

    The manual describes the electronic readout system of the Ring Imaging Cerenkov Detector at the CERN Omega Spectrometer. The system is described in its configuration of September 1984, after the RICH readout system had been used in two Omega experiments. (U.K.)

  14. Irradiation tests and expected performance of readout electronics of the ATLAS hadronic endcap calorimeter for the HL-LHC

    CERN Document Server

    Cheplakov, A; The ATLAS collaboration

    2014-01-01

    The readout electronics of the ATLAS Hadronic Endcap Calorimeter (HEC) will have to withstand a much more demanding radiation environment at the future high-luminosity LHC (HL-LHC) compared to LHC design values. The heart of the HEC read-out electronics is the pre-amplifier and summing (PAS) system which is realized in GaAs ASIC technology. The PAS devices are installed inside the LAr cryostat directly on the detector. They have been proven to operate reliably in LHC conditions up to luminosities of 1000 fb-1, within safety margins. However, at the HL-LHC a total luminosity of 3000 fb-1 is expected, which corresponds to radiation levels being increased by a factor 3-5. On top of that a safety factor of at least 2 needs to be accounted for to reflect our confidence in the simulations. The GaAs ASIC has therefore been exposed to neutron and proton radiation with integrated fluences in excess of 4∙10^15 n/cm2 and 2.6∙10^14 p/cm2, several factors above the levels corresponding to ten years of HL-LHC running. ...
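The qualification logic above is simple scaling arithmetic: the required test fluence is the simulated LHC-level fluence, scaled by the luminosity ratio and multiplied by a safety factor reflecting simulation uncertainty. A sketch with illustrative numbers (not the actual HEC fluence figures):

```python
# Sketch: fluence a component must be qualified for, following the
# scaling logic above. Inputs are illustrative, in arbitrary n/cm2 units.
def required_fluence(lhc_level, lumi_ratio, safety_factor=2.0):
    """simulated LHC-level fluence * luminosity scaling * safety factor."""
    return lhc_level * lumi_ratio * safety_factor

# e.g. HL-LHC radiation levels 3x the LHC design level, safety factor 2:
print(required_fluence(1.0e14, 3.0, 2.0))  # 6e14 n/cm2 (illustrative)
```

This is why the irradiation campaign quoted fluences "several factors above" the nominal ten-year HL-LHC levels: the margin absorbs both the luminosity increase and the simulation safety factor.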

  15. Irradiation Tests and Expected Performance of Readout Electronics of the ATLAS Hadronic Endcap Calorimeter for the HL-LHC

    CERN Document Server

    Cheplakov, A; The ATLAS collaboration

    2014-01-01

    The readout electronics of the ATLAS Hadronic Endcap Calorimeter (HEC) will have to withstand a much more demanding radiation environment at the future high-luminosity LHC (HL-LHC) compared to LHC design values. The heart of the HEC read-out electronics is the pre-amplifier and summing (PAS) system which is realized in GaAs ASIC technology. The PAS devices are installed inside the LAr cryostat directly on the detector. They have been proven to operate reliably in LHC conditions up to luminosities of 1000 fb-1, within safety margins. However, at the HL-LHC a total luminosity of 3000 fb-1 is expected, which corresponds to radiation levels being increased by a factor 3-5. On top of that a safety factor of at least 2 needs to be accounted for to reflect our confidence in the simulations. The GaAs ASIC has therefore been exposed to neutron and proton radiation with integrated fluences in excess of 4x10^15 n/cm2 and 2.6x10^14 p/cm2, several factors above the levels corresponding to ten years of HL-LHC running. In-s...

  16. Prototype ATLAS IBL Modules using the FE-I4A Front-End Readout Chip

    CERN Document Server

    Albert, J; Alimonti, Gianluca; Allport, Phil; Altenheiner, Silke; Ancu, Lucian; Andreazza, Attilio; Arguin, Jean-Francois; Arutinov, David; Backhaus, Malte; Bagolini, Alvise; Ballansat, Jacques; Barbero, Marlon; Barbier, Gérard; Bates, Richard; Battistin, Michele; Baudin, Patrick; Beau, Tristan; Beccherle, Roberto; Beck, Hans Peter; Benoit, Mathieu; Bensinger, Jim; Bomben, Marco; Borri, Marcello; Boscardin, Maurizio; Botelho Direito, Jose Antonio; Bousson, Nicolas; Boyd, George Russell Jr; Breugnon, Patrick; Bruni, Graziano; Bruschi, Marco; Buchholz, Peter; Buttar, Craig; Cadoux, Franck; Calderini, Giovanni; Caminada, Leah; Capeans, Mar; Casse, Gianluigi; Catinaccio, Andrea; Cavalli-Sforza, Matteo; Chauveau, Jacques; Chu, Ming-Lee; Ciapetti, Marco; Cindro, Vladimir; Citterio, Mauro; Clark, Allan; Cobal, Marina; Coelli, Simone; Colijn, Auke-Pieter; Colin, Daly; Collot, Johann; Crespo-Lopez, Olivier; Dalla Betta, Gian-Franco; Darbo, Giovanni; DaVia, Cinzia; David, Pierre-Yves; Debieux, Stéphane; Delebecque, Pierre; Devetak, Erik; DeWilde, Burton; Di Girolamo, Beniamino; Dinu, Nicoleta; Dittus, Fridolin; Diyakov, Denis; Djama, Fares; Dobos, Daniel Adam; Doonan, Kate; Dopke, Jens; Dorholt, Ole; Dube, Sourabh; Dushkin, Andrey; Dzahini, Daniel; Egorov, Kirill; Ehrmann, Oswin; Elldge, David; Elles, Sabine; Elsing, Markus; Eraud, Ludovic; Ereditato, Antonio; Eyring, Andreas; Falchieri, Davide; Falou, Aboud; Fang, Xiaochao; Fausten, Camille; Favre, Yannick; Ferrere, Didier; Fleta, Celeste; Fleury, Julien; Flick, Tobias; Forshaw, Dean; Fougeron, Denis; Fritzsch, Thomas; Gabrielli, Alessandro; Gaglione, Renaud; Gallrapp, Christian; Gan, K; Garcia-Sciveres, Maurice; Gariano, Giuseppe; Gastaldi, Thibaut; Gemme, Claudia; Gensolen, Fabrice; George, Matthias; Ghislain, Patrick; Giacomini, Gabriele; Gibson, Stephen; Giordani, Mario Paolo; Giugni, Danilo; Gjersdal, Håvard; Glitza, Karl Walter; Gnani, Dario; Godlewski, Jan; Gonella, Laura; Gorelov, Igor; Gorišek, Andrej; 
Gössling, Claus; Grancagnolo, Sergio; Gray, Heather; Gregor, Ingrid-Maria; Grenier, Philippe; Grinstein, Sebastian; Gromov, Vladimir; Grondin, Denis; Grosse-Knetter, Jörn; Hansen, Thor-Erik; Hansson, Per; Harb, Ali; Hartman, Neal; Hasi, Jasmine; Hegner, Franziska; Heim, Timon; Heinemann, Beate; Hemperek, Tomasz; Hessey, Nigel; Hetmánek, Martin; Hoeferkamp, Martin; Hostachy, Jean-Yves; Hügging, Fabian; Husi, Coralie; Iacobucci, Giuseppe; Idarraga, John; Ikegami, Yoichi; Janoška, Zdenko; Jansen, Jens; Jansen, Luc; Jensen, Frank; Jentzsch, Jennifer; Joseph, John; Kagan, Harris; Karagounis, Michael; Kass, Richard; Kenney, Christopher J; Kersten, Susanne; Kind, Peter; Klingenberg, Reiner; Kluit, Ruud; Kocian, Martin; Koffeman, Els; Kok, Angela; Korchak, Oleksandr; Korolkov, Ilya; Kostyukhin, Vadim; Krieger, Nina; Krüger, Hans; Kruth, Andre; Kugel, Andreas; Kuykendall, William; La Rosa, Alessandro; Lai, Chung-Hang; Lantzsch, Kerstin; Laporte, Didier; Lapsien, Tobias; Lounis, abdenour; Lozano, Manuel; Lu, Yunpeng; Lubatti, Henry; Macchiolo, Anna; Mallik, Usha; Mandić, Igor; Marchand, Denis; Marchiori, Giovanni; Massol, Nicolas; Matthias, Wittgen; Mättig, Peter; Mekkaoui, Abderrazak; Menouni, Mohsine; Menu, Johann; Meroni, Chiara; Mesa, Javier; Micelli, Andrea; Michal, Sébastien; Miglioranzi, Silvia; Mikuž, Marko; Mitsui, Shingo; Monti, Mauro; Moore, J; Morettini, Paolo; Muenstermann, Daniel; Murray, Peyton; Nellist, Clara; Nelson, David J; Nessi, Marzio; Neumann, Manuel; Nisius, Richard; Nordberg, Markus; Nuiry, Francois-Xavier; Oppermann, Hermann; Oriunno, Marco; Padilla, Cristobal; Parker, Sherwood; Pellegrini, Giulio; Pelleriti, Gabriel; Pernegger, Heinz; Piacquadio, Nicola Giacinto; Picazio, Attilio; Pohl, David; Polini, Alessandro; Popule, Jiří; Portell Bueso, Xavier; Povoli, Marco; Puldon, David; Pylypchenko, Yuriy; Quadt, Arnulf; Quirion, David; Ragusa, Francesco; Rambure, Thibaut; Richards, Erik; Ristic, Branislav; Røhne, Ole; Rothermund, Mario; Rovani, 
Alessandro; Rozanov, Alexandre; Rubinskiy, Igor; Rudolph, Matthew Scott; Rummler, André; Ruscino, Ettore; Salek, David; Salzburger, Andreas; Sandaker, Heidi; Schipper, Jan-David; Schneider, Basil; Schorlemmer, Andre; Schroer, Nicolai; Schwemling, Philippe; Seidel, Sally; Seiden, Abraham; Šícho, Petr; Skubic, Patrick; Sloboda, Michal; Smith, D; Sood, Alex; Spencer, Edwin; Strang, Michael; Stugu, Bjarne; Stupak, John; Su, Dong; Takubo, Yosuke; Tassan, Jean; Teng, Ping-Kun; Terada, Susumu; Todorov, Theodore; Tomášek, Michal; Toms, Konstantin; Travaglini, Riccardo; Trischuk, William; Troncon, Clara; Troska, Georg; Tsiskaridze, Shota; Tsurin, Ilya; Tsybychev, Dmitri; Unno, Yoshinobu; Vacavant, Laurent; Verlaat, Bart; Vianello, Elisa; Vigeolas, Eric; von Kleist, Stephan; Vrba, Václav; Vuillermet, Raphaël; Wang, Rui; Watts, Stephen; Weber, Michele; Weber, Marteen; Weigell, Philipp; Weingarten, Jens; Welch, Steven David; Wenig, Siegfried; Wermes, Norbert; Wiese, Andreas; Wittig, Tobias; Yildizkaya, Tamer; Zeitnitz, Christian; Ziolkowski, Michal; Zivkovic, Vladimir; Zoccoli, Antonio; Zorzi, Nicola; Zwalinski, Lukasz

    2012-01-01

    The ATLAS Collaboration will upgrade its semiconductor pixel tracking detector with a new Insertable B-layer (IBL) between the existing pixel detector and the vacuum pipe of the Large Hadron Collider. The extreme operating conditions at this location have necessitated the development of new radiation hard pixel sensor technologies and a new front-end readout chip, called the FE-I4. Planar pixel sensors and 3D pixel sensors have been investigated to equip this new pixel layer, and prototype modules using the FE-I4A have been fabricated and characterized using 120 GeV pions at the CERN SPS and 4 GeV positrons at DESY, before and after module irradiation. Beam test results are presented, including charge collection efficiency, tracking efficiency and charge sharing.

  17. Analyses of test beam data for the ATLAS upgrade readout chip (ABC130)

    International Nuclear Information System (INIS)

    As part of the ATLAS phase II upgrade it is planned to replace the current tracker with an all-silicon tracker. The outer part of the new tracker will consist of silicon strip detectors. For the readout of the strip detector a new Analog to Binary Converter chip (ABC130) was designed. The chip is processed in 130 nm technology. In laboratory measurements the preamplifier of the new ABC130 showed a significantly lower gain than expected. From the measurements in the laboratory it was not possible to determine whether the malfunction is in the preamplifier or in the test circuit; therefore an unbiased test was mandatory. Among other measurements, a test beam campaign was carried out at the Stanford Linear Accelerator Center (SLAC). The results of the measurement are shown in the presentation.

  18. FASTBUS readout system for the CDF DAQ upgrade

    International Nuclear Information System (INIS)

    The Data Acquisition System (DAQ) at the Collider Detector at Fermilab is currently being upgraded to handle a minimum of 100 events/sec for an aggregate bandwidth of at least 25 Mbytes/sec. The DAQ system is based on a commercial switching network that has interfaces to the VME bus. The modules that read out the front-end crates (FASTBUS and RABBIT) have to deliver the data to the VME bus based host adapters of the switch. This paper describes a readout system that has the required bandwidth while keeping the experiment dead time due to the readout to a minimum.

  19. Design of a large dynamics fast acquisition device: application to readout of the electromagnetic calorimeter in the ATLAS experiment

    International Nuclear Information System (INIS)

    The construction of the new particle accelerator, the LHC (Large Hadron Collider) at CERN, entails many research and development projects. This is the case in electronics, where the problem of acquiring large dynamic range signals at high sampling frequencies arises. Typically, the requirements are a dynamic range of about 65,000 (around 16 bits) at 40 MHz. Some solutions to this problem are presented. One of them uses a commercial analog-to-digital converter; this brings up the need for signal conditioning equipment. This thesis describes a way of building such a system, called a 'multi-gain system'. An application of this method is then presented: the realization of an automatic gain switching integrated circuit, designed for the readout of the ATLAS electromagnetic calorimeter. The choice and the calculation of the components of this system are described, followed by the results of measurements on a prototype made in the AMS 1.2 μm BiCMOS process. Possible enhancements are also presented. We conclude on the feasibility of such a system and its various applications in a number of fields not restricted to particle physics. (author)
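The multi-gain idea can be illustrated in a few lines: a moderate-resolution ADC covers a much larger dynamic range if the front end switches between gain scales and records which gain was used. The gain values and ADC width below are illustrative assumptions, not the calorimeter's actual parameters:

```python
# Sketch of the 'multi-gain' idea: cover a ~16-bit dynamic range with a
# 12-bit ADC by choosing the highest non-saturating gain per sample and
# keeping the chosen gain alongside the ADC code. Values are illustrative.
GAINS = [10.0, 1.0, 0.1]   # high, medium, low gain
ADC_MAX = 4095             # 12-bit ADC full scale

def digitize(signal):
    """Return (gain, adc_code) using the highest gain that fits."""
    for g in GAINS:
        code = int(signal * g)
        if code <= ADC_MAX:
            return g, code
    return GAINS[-1], ADC_MAX  # saturated even at the lowest gain

print(digitize(300.0))    # small signal -> high gain
print(digitize(30000.0))  # large signal -> low gain
```

Dividing the stored ADC code by the recorded gain recovers the signal amplitude over the full range, at the cost of the gain-switching logic being fast enough for the 40 MHz sampling.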

  20. Strip detectors read-out system user's guide

    International Nuclear Information System (INIS)

    The Strip Detector Read-out System consists of two VME modules, SDR-Flash and SDR-seq, complemented by a fast-logic stand-alone card, SDR-Trig. The system is a self-consistent, cost-effective and easy-to-use solution for the read-out of analog multiplexed signals coming from some of the front-end electronics chips (the Viking/VA chip family, Premus 128, etc.) currently used together with solid (silicon) or gas microstrip detectors. (author)

  1. Upgrade for the ATLAS Tile Calorimeter Readout Electronics at the High Luminosity LHC

    CERN Document Server

    Cerqueira, A; The ATLAS collaboration

    2012-01-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. It is a sampling calorimeter with iron plates as absorber and plastic scintillating tiles as the active material. The scintillation light produced by the passage of charged particles is transmitted by wavelength-shifting fibers to photomultiplier tubes (PMTs). The TileCal readout consists of about 10000 channels. The main upgrade will occur for the High Luminosity LHC phase (phase 2), which is scheduled around 2022. The upgrade aims at replacing the majority of the on- and off-detector electronics so that all calorimeter signals are directly digitized and sent to the off-detector electronics in the counting room. This will be done with minimum latency and maximum robustness. It will provide maximum TileCal information to the first level of the calorimeter trigger (probably called level 0) to improve the trigger efficiency as required to cope with the increased luminosity. An ambitious u...

  2. Monitoring Tool for Digital Errors in the ATLAS Tile Calorimeter Readout

    CERN Document Server

    Cuciuc, M; The ATLAS collaboration

    2012-01-01

    A software monitoring tool for easy visualization of digital errors that occur during data taking in the Tile Calorimeter of the ATLAS experiment has been developed. This system is useful for keeping track of performance over time as well as for making predictions about future failures. It can also correlate the digital errors with other problems, such as power supply faults, for diagnostic purposes. The ATLAS archive database is used to correlate the current digital error rates with the detector and data acquisition status. The results are stored locally so that users can monitor the evolution of error rates and localize problems. The system provides a flexible, easy-to-use interface that can be accessed with a web browser.

  3. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies. PMID:9148878

  4. The Read-Out Driver for the ATLAS MDT Muon Precision Chambers

    CERN Document Server

    Boterenbrood, H; Kieft, G; König, A; Vermeulen, J C; Wijnen, T A M; 14th IEEE - NPSS Real Time Conference 2005 Nuclear Plasma Sciences Society

    2006-01-01

    Some 200 MDT Read Out Drivers (MRODs) will be built to read out the 1200 MDT precision chambers of the muon spectrometer of the ATLAS experiment at the LHC. The MRODs receive event data via optical links (one per chamber, up to 8 per MROD), build event fragments at a maximum rate of 100 kHz, output these to the ATLAS data-acquisition system and take care of monitoring and error checking, handling and flagging. The design of the MROD-1 prototype (a 9U VME64 module in which this functionality is implemented using FPGAs and ADSP-21160 Digital Signal Processors programmed in C++) is described, followed by a presentation of results of performance measurements. Then the implications for the production version (called MROD-X) and the experience with pre-production modules of the MROD-X are discussed.

  5. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger object correlation) at the three trigger components L1Topo, CT and HLT; on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data-taking conditions, such as falling luminosity or rate spikes in the detector readout ...
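The prescale mechanism mentioned above has a simple core: a prescale of N means a trigger line accepts roughly 1 in N otherwise-passing events, reducing its rate by a factor N. A minimal deterministic sketch (real trigger systems may use random or more elaborate schemes; this counter-based variant is an illustrative assumption):

```python
# Sketch: a deterministic prescale applied to a trigger line.
# A prescale of N accepts 1 in N events that pass the selection.
def accepts(event_counter: int, prescale: int) -> bool:
    return event_counter % prescale == 0

prescale = 4
accepted = [n for n in range(12) if accepts(n, prescale)]
print(accepted)  # every 4th event: [0, 4, 8]
```

Regenerating prescale values on the fly, as the configuration system does, lets the output rate track falling luminosity without redefining the trigger selections themselves.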

  6. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2014-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  7. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2013-01-01

    In this paper we describe the ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  8. Commissioning of the read-out driver (ROD) card for the ATLAS IBL detector and upgrade studies for the pixel Layers 1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Balbi, G.; Bindi, M. [Istituto Nazionale di Fisica Nucleare (INFN), Bologna (Italy); Falchieri, D. [Istituto Nazionale di Fisica Nucleare (INFN), Bologna (Italy); Department of Physics and Astronomy, University of Bologna (Italy); Gabrielli, A., E-mail: alessandro.gabrielli@bo.infn.it [Istituto Nazionale di Fisica Nucleare (INFN), Bologna (Italy); Department of Physics and Astronomy, University of Bologna (Italy); Travaglini, R. [Istituto Nazionale di Fisica Nucleare (INFN), Bologna (Italy); Chen, S.-P.; Hsu, S.-C.; Hauck, S. [University of Washington, Seattle (United States); Kugel, A. [ZITI – Institute for Computer Engineering, University of Heidelberg at Mannheim (Germany)

    2014-11-21

    The higher luminosity that is expected for the LHC after future upgrades will require better performance by the data acquisition system, especially in terms of throughput. In particular, during the first shutdown of the LHC collider in 2013/14, the ATLAS Pixel Detector will be equipped with a fourth layer – the Insertable B-Layer or IBL – located at a radius smaller than the present three layers. Consequently, a new front-end ASIC (FE-I4) was designed as well as a new off-detector chain. The latter is composed mainly of two 9U-VME cards called the Back-Of-Crate (BOC) and Read-Out Driver (ROD). The ROD is used for data and event formatting and for configuration and control of the overall read-out electronics. After some prototyping samples were completed, a pre-production batch of 5 ROD cards was delivered with the final layout. Production of another 15 ROD cards is ongoing in Fall 2013, and commissioning is scheduled for 2014. Altogether 14 cards are necessary for the 14 staves of the IBL detector, one additional card is required by the Diamond Beam Monitor (DBM), and spare ROD cards will be produced for a total of 20 boards. This paper describes some integration tests that were performed and our plan for testing the production ROD cards. Slices of the IBL read-out chain have been instrumented, and ROD performance is verified on a test bench mimicking a small-sized final setup. This contribution also reports on the possible adoption of the IBL ROD first for ATLAS Pixel Detector Layer 2 and, possibly, in the future, for Layer 1.

  9. Commissioning of the read-out driver (ROD) card for the ATLAS IBL detector and upgrade studies for the pixel Layers 1 and 2

    International Nuclear Information System (INIS)

    The higher luminosity that is expected for the LHC after future upgrades will require better performance by the data acquisition system, especially in terms of throughput. In particular, during the first shutdown of the LHC collider in 2013/14, the ATLAS Pixel Detector will be equipped with a fourth layer – the Insertable B-Layer or IBL – located at a radius smaller than the present three layers. Consequently, a new front-end ASIC (FE-I4) was designed as well as a new off-detector chain. The latter is composed mainly of two 9U-VME cards called the Back-Of-Crate (BOC) and Read-Out Driver (ROD). The ROD is used for data and event formatting and for configuration and control of the overall read-out electronics. After some prototyping samples were completed, a pre-production batch of 5 ROD cards was delivered with the final layout. Production of another 15 ROD cards is ongoing in Fall 2013, and commissioning is scheduled for 2014. Altogether 14 cards are necessary for the 14 staves of the IBL detector, one additional card is required by the Diamond Beam Monitor (DBM), and spare ROD cards will be produced for a total of 20 boards. This paper describes some integration tests that were performed and our plan for testing the production ROD cards. Slices of the IBL read-out chain have been instrumented, and ROD performance is verified on a test bench mimicking a small-sized final setup. This contribution also reports on the possible adoption of the IBL ROD first for ATLAS Pixel Detector Layer 2 and, possibly, in the future, for Layer 1.

  10. The ATLAS Detector Control System

    CERN Document Server

    Schlenker, S; Kersten, S; Hirschbuehl, D; Braun, H; Poblaguev, A; Oliveira Damazio, D; Talyshev, A; Zimmermann, S; Franz, S; Gutzwiller, O; Hartert, J; Mindur, B; Tsarouchas, CA; Caforio, D; Sbarra, C; Olszowska, J; Hajduk, Z; Banas, E; Wynne, B; Robichaud-Veronneau, A; Nemecek, S; Thompson, PD; Mandic, I; Deliyergiyev, M; Polini, A; Kovalenko, S; Khomutnikov, V; Filimonov, V; Bindi, M; Stanecka, E; Martin, T; Lantzsch, K; Hoffmann, D; Huber, J; Mountricha, E; Santos, HF; Ribeiro, G; Barillari, T; Habring, J; Arabidze, G; Boterenbrood, H; Hart, R; Marques Vinagre, F; Lafarguette, P; Tartarelli, GF; Nagai, K; D'Auria, S; Chekulaev, S; Phillips, P; Ertel, E; Brenner, R; Leontsinis, S; Mitrevski, J; Grassi, V; Karakostas, K; Iakovidis, G.; Marchese, F; Aielli, G

    2011-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of >130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years an...

  11. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2014-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will place increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  12. ATLAS Nightly Build System Upgrade

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Simmons, B; Undrus, A

    2013-01-01

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will place increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nigh...

  13. The Retinal Readout System: A Status Report

    CERN Document Server

    Litke, A M

    1999-01-01

    The 'Retinal Readout System' is being developed to study the language the eye uses to send information about the visual world to the brain. Its architecture is based on that of silicon microstrip detectors. An array of 512 microscopic electrodes picks up the signals generated by the output neurons of live retinal tissue in response to a dynamic image focused on the input neurons. These signals are amplified, filtered and multiplexed by a set of eight custom-designed VLSI readout chips, and digitized and recorded by a data acquisition system. This report describes the goals, design, and status of the system. (author)

  14. Spatial distribution read-out system for thermoluminescence sheets

    Science.gov (United States)

    Yamamoto, I.; Tomiyama, T.; Imaeda, K.; Ninagawa, K.; Wada, T.; Yamashita, Y.; Misaki, A.

    1985-01-01

    A spatial distribution read-out system for thermoluminescence (TL) sheets has been developed. The system consists of a high-gain image intensifier, a CCD-TV camera, a video image processor and a host computer. It has been applied to artificial TL sheets (Eu-doped BaSO4) for detecting high-energy electromagnetic showers and heavy-nuclei tracks.

  15. The New Readout System of the NA62 LKr Calorimeter

    CERN Document Server

    Ceccucci, A; Farthouat, P; Lamanna, G; Rouet, J; Ryjov, V; Venditti, S

    2015-01-01

    The NA62 experiment [1] at the CERN SPS (Super Proton Synchrotron) accelerator aims at studying kaon decays with high precision. The high-resolution Liquid Krypton (LKr) calorimeter, built for the NA48 experiment [2], is a crucial part of the photon-veto system; to cope with the demanding NA62 requirements, its back-end electronics had to be completely renewed. The new readout system is based on the Calorimeter REAdout Module (CREAM) [3], a 6U VME board whose design and production was sub-contracted to CAEN [4], with the CERN NA62 group continuously supervising the development and production phase. The first version of the board was delivered by the manufacturer in March 2013 and, as of June 2014, the full board production is ongoing. In addition to describing the CREAM board, all aspects of the new LKr readout system, including its integration within the NA62 TDAQ scheme, will be treated.

  16. A compact light readout system for longitudinally segmented shashlik calorimeters

    CERN Document Server

    Berra, A; Cecchini, S; Cindolo, F; Jollet, C; Longhin, A; Ludovici, L; Mandrioli, G; Mauri, N; Meregaglia, A; Paoloni, A; Pasqualini, L; Patrizii, L; Pozzato, M; Pupilli, F; Prest, M; Sirri, G; Terranova, F; Vallazza, E; Votano, L

    2016-01-01

    The longitudinal segmentation of shashlik calorimeters is challenged by dead zones and non-uniformities introduced by the light collection and readout system. This limitation can be overcome by direct fiber-photosensor coupling, avoiding routing and bundling of the wavelength shifter fibers and embedding ultra-compact photosensors (SiPMs) in the bulk of the calorimeter. We present the first experimental test of this readout scheme performed at the CERN PS-T9 beamline in 2015 with negative particles in the 1-5~GeV energy range. In this paper, we demonstrate that the scheme does not compromise the energy resolution and linearity compared with standard light collection and readout systems. In addition, we study the performance of the calorimeter for partially contained charged hadrons to assess the $e/\\pi$ separation capability and the response of the photosensors to direct ionization.

  17. A compact light readout system for longitudinally segmented shashlik calorimeters

    Science.gov (United States)

    Berra, A.; Brizzolari, C.; Cecchini, S.; Cindolo, F.; Jollet, C.; Longhin, A.; Ludovici, L.; Mandrioli, G.; Mauri, N.; Meregaglia, A.; Paoloni, A.; Pasqualini, L.; Patrizii, L.; Pozzato, M.; Pupilli, F.; Prest, M.; Sirri, G.; Terranova, F.; Vallazza, E.; Votano, L.

    2016-09-01

    The longitudinal segmentation of shashlik calorimeters is challenged by dead zones and non-uniformities introduced by the light collection and readout system. This limitation can be overcome by direct fiber-photosensor coupling, avoiding routing and bundling of the wavelength shifter fibers and embedding ultra-compact photosensors (SiPMs) in the bulk of the calorimeter. We present the first experimental test of this readout scheme performed at the CERN PS-T9 beamline in 2015 with negative particles in the 1-5 GeV energy range. In this paper, we demonstrate that the scheme does not compromise the energy resolution and linearity compared with standard light collection and readout systems. In addition, we study the performance of the calorimeter for partially contained charged hadrons to assess the e / π separation capability and the response of the photosensors to direct ionization.

  18. The pixel readout system for the PHENIX pad chambers

    International Nuclear Information System (INIS)

    A new concept for two-dimensional position readout of wire chambers is described. The basic idea is to use a cathode segmented into small pixels that are read out in specific groups (pads). The electronics is mounted on the outer face of the chamber with a chip-on-board technique, pushing the material thickness to a minimum. The system described here, containing 210 000 readout channels, will be used to read out the pad chambers in the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC)
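    The pixel-to-pad grouping idea can be illustrated with a toy mapping. The contiguous-block geometry below is an assumption for illustration only, not the actual interleaved PHENIX pad pattern; the point is that many cathode pixels share one readout channel, reducing channel count while preserving two-dimensional position information:

    ```python
    def pad_index(pixel_row, pixel_col, group_rows=3, group_cols=3):
        """Map a pixel coordinate to its readout pad (group) index.
        Illustrative grouping: contiguous group_rows x group_cols blocks."""
        return (pixel_row // group_rows, pixel_col // group_cols)

    def channels_for_hit(hit_pixels):
        """Collect the set of pad channels fired by a cluster of hit pixels."""
        return {pad_index(r, c) for r, c in hit_pixels}
    ```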

  19. The pixel readout system for the PHENIX pad chambers

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Paul B., E-mail: paul.nilsson@kosufy.lu.se; Barrette, J.; Bryan, W.; Fraenkel, Z.; Greene, V.; Garpman, S.; Gustafsson, H.-A.; Jagadish, U.; Nikkinen, L.; Lacey, R.; Lauret, J.; Mark, S.K.; Milov, A.; O' Brien, E.; Oskarsson, A.; Oesterman, L.; Otterlund, I.; Pinkenburg, C.; Ravinovich, I.; Rose, A.; Silvermyr, D.; Sivertz, M.; Smith, M.; Stenlund, E.; Svensson, T.; Teodorescu, O.; Tserruya, I.; Xie, W.; Young, G.R

    1999-12-27

    A new concept for two-dimensional position readout of wire chambers is described. The basic idea is to use a cathode segmented into small pixels that are read out in specific groups (pads). The electronics is mounted on the outer face of the chamber with a chip-on-board technique, pushing the material thickness to a minimum. The system described here, containing 210 000 readout channels, will be used to read out the pad chambers in the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC)

  20. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  1. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  2. The design of the 3He readout system on CSNS

    International Nuclear Information System (INIS)

    The electronics of the 3He readout system used by the High-Intensity Powder Diffraction instrument of the China Spallation Neutron Source (CSNS) project is introduced. The design of the Charge Measurement module (MQ) is described in detail, including the structure of the circuit and the firmware of the FPGA on the board. Test results are given at the end. (authors)
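    A minimal sketch of the kind of charge measurement such a module performs, assuming pedestal subtraction over the first few ADC samples and an invented ADC-to-picocoulomb calibration constant (the abstract does not specify the actual algorithm):

    ```python
    def integrate_charge(samples, baseline_window=4, adc_to_pc=0.25):
        """Estimate pulse charge from a list of ADC samples: average the
        first baseline_window samples as the pedestal, sum the
        pedestal-subtracted remainder, and convert to picocoulombs
        (adc_to_pc is an assumed calibration constant)."""
        pedestal = sum(samples[:baseline_window]) / baseline_window
        integral = sum(s - pedestal for s in samples[baseline_window:])
        return integral * adc_to_pc
    ```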

  3. Development of a digital readout board for the ATLAS Tile Calorimeter upgrade demonstrator

    International Nuclear Information System (INIS)

    During the LHC shutdown in 2013/14, one of the ATLAS scintillating Tile Calorimeter (TileCal) on-detector modules will be replaced with a compatible hybrid demonstrator system. This is being built to fulfill all requirements for the complete upgrade of the TileCal electronics in 2022 but augmented to stay compatible with the present system. We report on the hybrid system's FPGA based communication module that is responsible for receiving and unpacking commands using a 4.8 Gbps downlink and driving a high bandwidth data uplink. The report includes key points like multi-gigabit transmission, clock distribution, programming and operation of the hardware. We also report on a firmware skeleton implementing all these key points and demonstrate how timing, trigger, control and data transmission can be achieved in the demonstrator

  4. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$, which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and will be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible ReadOutDriver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...
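    The event-fragment building step described above, keying data to a Level-1 trigger ID and a bunch-crossing ID, can be sketched as below. The three-word header layout is illustrative, not the actual ATLAS ROD fragment format:

    ```python
    import struct

    def build_fragment(l1_id, bcid, payload_words):
        """Pack a simplified event fragment: a header carrying the Level-1
        ID, the bunch-crossing ID and the word count, followed by the
        32-bit data payload (little-endian; layout is illustrative)."""
        header = struct.pack("<III", l1_id, bcid, len(payload_words))
        body = struct.pack("<%dI" % len(payload_words), *payload_words)
        return header + body

    def parse_fragment(blob):
        """Unpack a fragment built by build_fragment."""
        l1_id, bcid, n = struct.unpack_from("<III", blob, 0)
        words = struct.unpack_from("<%dI" % n, blob, 12)
        return l1_id, bcid, list(words)
    ```

    Validity checking on the receiving side then amounts to confirming that consecutive fragments carry the expected Level-1 ID sequence.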

  5. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  6. Radiation hardness and lifetime studies of LEDs and VCSELs for the optical readout of the ATLAS SCT

    CERN Document Server

    Beringer, J; Mommsen, R K; Nickerson, R B; Weidberg, A R; Monnier, E; Hou, H Q; Lear, K L

    1999-01-01

    We study the radiation hardness and the lifetime of Light Emitting Diodes (LEDs) and Vertical Cavity Surface Emitting Laser diodes (VCSELs) in the context of the development of the optical readout for the ATLAS SemiConductor Tracker (SCT) at LHC. About 170 LEDs from two different manufacturers and about 130 VCSELs were irradiated with neutron and proton fluences equivalent to (and in some cases more than twice as high as) the combined neutral and charged particle fluence of about 5x10 sup 1 sup 4 n (1 MeV eq. in GaAs)/cm sup 2 expected in the ATLAS inner detector. We report on the radiation damage and the conditions required for its partial annealing under forward bias, we calculate radiation damage constants, and we present post-irradiation failure rates for LEDs and VCSELs. The lifetime after irradiation was investigated by operating the diodes at an elevated temperature of 50 degree sign C for several months, resulting in operating times corresponding to up to 70 years of operation in the ATLAS SCT. From o...

  7. Readout Electronics Calibration and Energy Resolution Analysis for ATLAS New Small Wheel Phase I Upgrade

    CERN Document Server

    Trischuk, Dominique Anderson

    2016-01-01

    The High Luminosity Large Hadron Collider (HL-LHC), a planned upgrade of the LHC for 2025, will provide a challenging environment for the detectors. The ATLAS muon endcap system was not designed to operate at the high rates that will be delivered by the HL-LHC and must be upgraded. The New Small Wheel (NSW) will replace the current Muon Small Wheel and will provide enhanced trigger and tracking capabilities. The VMM chip is a custom application-specific integrated circuit (ASIC), designed at Brookhaven National Laboratory, that will serve as the front-end ASIC for the detectors in the NSW. In order to provide precise timing measurements, the VMM chip must be calibrated. The micromegas are one of the two detector technologies that will be installed in the NSW. A measurement of the energy spectrum can be used to calculate the energy resolution of the micromegas. The calibration method for the VMM chips and energy resolution measurements of the micromegas are described in this report.

  8. Upgrading the ATLAS control system

    International Nuclear Information System (INIS)

    Heavy-ion accelerators are tools used in the research of nuclear and atomic physics. The ATLAS facility at the Argonne National Laboratory is one such tool. The ATLAS control system serves as the primary operator interface to the accelerator. A project to upgrade the control system is presently in progress. Since this is an upgrade project and not a new installation, it was imperative that the development work proceed without interference to normal operations. An additional criteria for the development work was that the writing of additional ''in-house'' software should be kept to a minimum. This paper briefly describes the control system being upgraded, and explains some of the reasons for the decision to upgrade the control system. Design considerations and goals for the new system are described, and the present status of the upgrade is discussed

  9. The realization of VME readout system based on embedded linux

    International Nuclear Information System (INIS)

    This article describes how Embedded Linux was realized on the PowerPC VMEbus controller. First, it introduces the system hardware, the data processing flow, and the test method. It then describes the key technologies of the implementation: the VME read and write, interrupt, and DMA drivers. Finally, it analyses the test results. The system can be used to test VME-bus boards and read out their data. (authors)
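    Word-aligned register access over a memory-mapped VME window, as exercised by such read/write drivers, can be modeled minimally. Here a bytearray stands in for the mapped bus bridge, which a real driver would obtain via mmap; class and method names are hypothetical:

    ```python
    class VmeWindow:
        """Toy model of a memory-mapped VME address window: 32-bit
        register reads and writes at word-aligned offsets."""
        def __init__(self, size):
            self.mem = bytearray(size)  # stand-in for the mapped window

        def write32(self, offset, value):
            assert offset % 4 == 0, "VME D32 access must be word-aligned"
            self.mem[offset:offset + 4] = value.to_bytes(4, "little")

        def read32(self, offset):
            assert offset % 4 == 0, "VME D32 access must be word-aligned"
            return int.from_bytes(self.mem[offset:offset + 4], "little")
    ```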

  10. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  11. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  12. The LUCID detector ATLAS luminosity monitor and its electronic system

    Science.gov (United States)

    Manghi, F. Lasagni

    2016-07-01

    In 2015 the LHC is starting a new run at a higher center-of-mass energy (13 TeV) and with 25 ns bunch spacing. The ATLAS luminosity monitor LUCID has been completely rebuilt, both the detector and the electronics, in order to cope with the new running conditions. The new detector electronics features a new read-out board (LUCROD) for signal acquisition and digitization, PMT-charge integration and single-side luminosity measurements, and a revisited LUMAT board for the combination of signals from the two detectors. This note describes the new board design, the firmware and software developments, the implementation of the luminosity algorithms, the optical communication between the boards and the integration into the ATLAS TDAQ system.
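    A common zero-counting luminosity algorithm of the kind implemented in such boards: assuming Poisson-distributed interactions, the visible mean number of interactions per bunch crossing follows from the fraction of crossings with at least one detected event. Whether LUCID uses exactly this estimator is not stated in the abstract; the sketch is illustrative:

    ```python
    import math

    def mu_event_or(n_events, n_bunch_crossings):
        """Zero-counting estimate of the visible mean number of
        interactions per bunch crossing: for Poisson statistics the
        fraction of empty crossings is exp(-mu_vis), so
            mu_vis = -ln(1 - N_events / N_crossings).
        The result must still be corrected for detector efficiency."""
        frac = n_events / n_bunch_crossings
        if frac >= 1.0:
            raise ValueError("algorithm saturates when every crossing fires")
        return -math.log(1.0 - frac)
    ```

    The estimator saturates as the event fraction approaches one, which is why event-counting algorithms lose sensitivity at very high pile-up.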

  13. The OPERA global readout and GPS distribution system

    International Nuclear Information System (INIS)

    OPERA is an experiment dedicated to the observation of νμ into ντ oscillations in appearance mode, using a pure νμ beam (CNGS) produced at CERN and detected at Gran Sasso. The experiment exploits a hybrid technology with emulsions and electronic detectors. The OPERA readout is performed through a triggerless, continuously running, distributed and highly available system. Its global architecture is based on Ethernet-capable smart sensors with microprocessing and a network interface directly at the front-end stage. A unique interface board is used throughout the detector to read out ADC-, TDC- or controller boards. All readout channels are synchronized through a GPS-locked common bidirectional clock distribution system, developed on purpose in PCI format. It also offers a second line to address all channels and allows off-line synchronization with the CNGS to select the events.
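    Off-line event selection in a triggerless, GPS-timestamped system reduces to a time-window cut around the beam spill. The 10.5 μs spill length below matches the CNGS extraction, while the function name and margin parameter are illustrative assumptions:

    ```python
    def select_on_spill(hit_times_ns, spill_start_ns,
                        spill_length_ns=10500, margin_ns=0):
        """Triggerless selection: keep hits whose GPS timestamp falls
        inside the beam-spill window [start - margin, start + length
        + margin]. Times are in nanoseconds; values are illustrative."""
        lo = spill_start_ns - margin_ns
        hi = spill_start_ns + spill_length_ns + margin_ns
        return [t for t in hit_times_ns if lo <= t <= hi]
    ```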

  14. Optical readout and control systems for the CMS tracker

    CERN Document Server

    Troska, Jan K; Faccio, F; Gill, K; Grabit, R; Jareno, R M; Sandvik, A M; Vasey, F

    2003-01-01

    The Compact Muon Solenoid (CMS) Experiment will be installed at the CERN Large Hadron Collider (LHC) in 2007. The readout system for the CMS Tracker consists of 10000000 individual detector channels that are time-multiplexed onto 40000 unidirectional analogue (40 MSample /s) optical links for transmission between the detector and the 65 m distant counting room. The corresponding control system consists of 2500 bi-directional digital (40 Mb/s) optical links based as far as possible upon the same components. The on-detector elements (lasers and photodiodes) of both readout and control links will be distributed throughout the detector volume in close proximity to the silicon detector elements. For this reason, strict requirements are placed on minimal package size, mass, power dissipation, immunity to magnetic field, and radiation hardness. It has been possible to meet the requirements with the extensive use of commercially available components with a minimum of customization. The project has now entered its vol...

  15. ATLAS nightly build system upgrade

    International Nuclear Information System (INIS)

    The ATLAS Nightly Build System is a facility for automatic production of software releases. Being the major component of ATLAS software infrastructure, it supports more than 50 multi-platform branches of nightly releases and provides ample opportunities for testing new packages, for verifying patches to existing software, and for migrating to new platforms and compilers. The Nightly System testing framework runs several hundred integration tests of different granularity and purpose. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The first LHC long shutdown (2013-2015) activities will place increased load on the Nightly System as additional releases and builds are needed to exploit new programming techniques, languages, and profiling tools. This paper describes the plan of the ATLAS Nightly Build System Long Shutdown upgrade. It brings modern database and web technologies into the Nightly System, improves monitoring of nightly build results, and provides new tools for offline release shifters. We will also outline our long-term plans for distributed nightly releases builds and testing

  16. Configuration of the ATLAS Trigger System

    CERN Document Server

    Elsing, M; Armstrong, S; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Ma, H; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Rajagopalan, S; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A; Wengler, T; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    In this paper a conceptual overview is given of the software foreseen to configure the ATLAS trigger system. Two functional software prototypes have been developed to configure the ATLAS Level-1 emulation and the High-Level Trigger software. Emphasis has been put so far on following a consistent approach between the two trigger systems and on addressing their requirements, taking into account the specific use-case of the `Region-of-Interest' mechanism for the ATLAS Level-2 trigger. In the future the configuration of the two systems will be combined to ensure a consistent selection configuration for the entire ATLAS trigger system.

  17. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, GL; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through three trigger levels, selecting interesting events for analysis with a factor of 10^7 reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ system h...
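
    The cascaded rate reduction described above can be illustrated with a short sketch. The per-stage output rates below are assumptions chosen for illustration only, not the exact ATLAS figures; the quoted 10^7 factor applies to the data rate, while this sketch tracks event rates through a three-level chain.

```python
# Illustrative sketch of cascaded rate reduction in a multi-level
# trigger chain. Stage names and output rates are assumed values.
collision_rate_hz = 40e6  # LHC bunch-crossing rate

# (stage name, assumed output rate in Hz)
stages = [("L1", 75e3), ("L2", 3e3), ("EventFilter", 200.0)]

rate = collision_rate_hz
for name, out_rate in stages:
    print(f"{name}: {rate:.3g} Hz -> {out_rate:.3g} Hz "
          f"(rejection x{rate / out_rate:.1f})")
    rate = out_rate

overall = collision_rate_hz / rate
print(f"overall event-rate reduction: x{overall:.3g}")
```

With these assumed stage rates the overall event-rate reduction works out to 2x10^5; the additional factor quoted for the data rate comes from event-size effects not modelled here.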

  18. Upgrade readout and trigger electronics for the ATLAS liquid argon calorimeters for future LHC running

    CERN Document Server

    Yamanaka, T; The ATLAS collaboration

    2014-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce almost 200K signals that must be digitized and processed by the front-end and back-end electronics at every triggered event. Additionally, the front-end electronics sums analog signals to provide coarse-grained energy sums to the first-level (L1) trigger system. The current design was optimized for the nominal LHC luminosity of 10^34 cm^-2s^-1. However, in future higher-luminosity phases of LHC operation, the luminosity (and associated pile-up noise) will be 3-7 times higher. An improved spatial granularity of the trigger primitives is therefore proposed, in order to improve the trigger performance at high background rejection rates. For the first upgrade phase in 2018, new LAr Trigger Digitizer Boards are being designed to receive the higher granularity signals, digitize them on-detector and send them via fast optical links to a new digital processing system (DPS). This applies digital filtering and identifies significant energy depositions in each trigger ch...

  19. Upgraded readout and trigger electronics for the ATLAS liquid argon calorimeters for future LHC running

    CERN Document Server

    Yamanaka, T; The ATLAS collaboration

    2014-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce almost 200K signals that must be digitized and processed by the front-end and back-end electronics at every triggered event. Additionally, the front-end electronics sums analog signals to provide coarse-grained energy sums to the first-level (L1) trigger system. The current design was optimized for the nominal LHC luminosity of 10^34 cm^-2s^-1. However, in future higher-luminosity phases of LHC operation, the luminosity (and associated pile-up noise) will be 3-7 times higher. An improved spatial granularity of the trigger primitives is therefore proposed, in order to improve the trigger performance at high background rejection rates. For the first upgrade phase in 2018, new LAr Trigger Digitizer Boards are being designed to receive the higher granularity signals, digitize them on-detector and send them via fast optical links to a new digital processing system (DPS). This applies digital filtering and identifies significant energy depositions in each trigger ch...

  20. Upgraded readout and trigger electronics for the ATLAS liquid argon calorimeters for future LHC running

    CERN Document Server

    Ma, Hong; The ATLAS collaboration

    2014-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce almost 200K signals that must be digitized and processed by the front-end and back-end electronics for every triggered event. Additionally, the front-end electronics sums analog signals to provide coarse-grained energy sums to the first-level (L1) trigger system. The current design was optimized for the nominal LHC luminosity of 10^34/cm^2/s. However, in future higher-luminosity phases of LHC operation, the luminosity (and associated pile-up noise) will be 3-7 times higher. An improved spatial granularity of the trigger primitives is therefore proposed, in order to improve the trigger performance at high background rejection rates. For the first upgrade phase in 2018, new LAr Trigger Digitizer Boards are being designed to receive the higher granularity signals, digitize them on-detector and send them via fast optical links to a new digital processing system (DPS). This applies digital filtering and identifies significant energy depositions in each trigger chan...
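
    The "digital filtering" applied by the DPS to identify significant energy depositions is typically an FIR-style weighted sum over the pulse samples. A minimal sketch follows; the coefficients, pedestal, and sample values are illustrative assumptions, since real systems derive the weights from the measured pulse shape and noise autocorrelation.

```python
# Minimal sketch of FIR-style optimal filtering: estimate a deposited
# energy as a weighted sum of pedestal-subtracted ADC samples.
def filtered_energy(samples, coeffs, pedestal):
    """Energy estimate from discrete samples of a calorimeter pulse."""
    assert len(samples) == len(coeffs)
    return sum(c * (s - pedestal) for c, s in zip(coeffs, samples))

# Four ADC samples around the pulse peak (assumed values)
samples = [1005, 1460, 1320, 1110]
coeffs = [0.10, 0.55, 0.30, 0.05]   # assumed filter weights
pedestal = 1000                      # assumed baseline in ADC counts

energy = filtered_energy(samples, coeffs, pedestal)
```

A channel would then be flagged as a significant deposition when `energy` exceeds a noise-derived threshold.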

  1. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, G-L; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through multiple trigger levels, selecting interesting events for analysis with a factor of $10^{7}$ reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ s...

  2. Drift chamber readout system of the DIRAC experiment

    CERN Document Server

    Afanasiev, L G

    2002-01-01

    A drift chamber readout system of the DIRAC experiment at CERN is presented. The system is intended to read out the signals from planar chambers operating in a high current mode. The sense wire signals are digitized in the 16-channel time-to-digital converter boards which are plugged in the signal plane connectors. This design results in a reduced number of modules, a small number of cables and high noise immunity. The system has been successfully operating in the experiment since 1999.

  3. ATLAS TDAQ System Administration: evolution and re-design

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Lee, Christopher Jon; Scannicchio, Diana; Twomey, Matthew Shaun

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of $\sim 3000$ servers, processing the data readout from $\sim 100$ million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration to Puppet of the Configuration Management systems has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and...

  4. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, member appointments, author lists, the preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases, and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task, given the experiment's long lifetime and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
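
    The "intermediate layer isolating the user from the particularities of each database" can be sketched as a thin facade over interchangeable backends. Everything in this sketch (class names, the in-memory backend, the `search` method) is hypothetical and only illustrates the design idea, not the actual Glance API.

```python
# Sketch of an intermediate data-access layer: callers query through
# one interface and never touch a backend directly.
class Backend:
    def fetch(self, query):
        raise NotImplementedError

class InMemoryBackend(Backend):
    """Stand-in for a real database technology."""
    def __init__(self, rows):
        self.rows = rows

    def fetch(self, query):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in query.items())]

class Glance:
    """Hypothetical facade: retrieval independent of backend technology."""
    def __init__(self, backend):
        self.backend = backend

    def search(self, **criteria):
        return self.backend.fetch(criteria)

db = Glance(InMemoryBackend([{"name": "A", "role": "speaker"},
                             {"name": "B", "role": "author"}]))
speakers = db.search(role="speaker")
```

Swapping `InMemoryBackend` for another `Backend` subclass leaves all caller code unchanged, which is the point of such a layer.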

  5. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists, plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, member appointments, author lists, the preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases, and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task, given the experiment's long lifetime and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  6. The LCLS Undulator Beam Loss Monitor Readout System

    Energy Technology Data Exchange (ETDEWEB)

    Dusatko, John; Browne, M.; Fisher, A.S.; Kotturi, D.; Norum, S.; Olsen, J.; /SLAC

    2012-07-23

    The LCLS Undulator Beam Loss Monitor System is required to detect any loss radiation seen by the FEL undulators. The undulator segments consist of permanent magnets which are very sensitive to radiation damage. The operational goal is to keep demagnetization below 0.01% over the life of the LCLS. The BLM system is designed to help achieve this goal by detecting any loss radiation and indicating a fault condition if the radiation level exceeds a certain threshold. Upon reception of this fault signal, the LCLS Machine Protection System takes appropriate action by either halting or rate limiting the beam. The BLM detector consists of a PMT coupled to a Cherenkov radiator located near the upstream end of each undulator segment. There are 33 BLMs in the system, one per segment. The detectors are read out by a dedicated system that is integrated directly into the LCLS MPS. The BLM readout system provides monitoring of radiation levels, computation of integrated doses, detection of radiation excursions beyond set thresholds, fault reporting and control of BLM system functions. This paper describes the design, construction and operational performance of the BLM readout system.
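
    The core readout behaviour described above (accumulating integrated dose and flagging a fault when a loss reading exceeds a threshold) can be sketched as follows. The class, method names, and units are illustrative assumptions, not the actual LCLS interfaces.

```python
# Sketch of per-channel BLM readout logic: integrate dose readings and
# latch a fault when a reading exceeds a configurable threshold.
class BLMChannel:
    def __init__(self, fault_threshold):
        self.fault_threshold = fault_threshold
        self.integrated_dose = 0.0
        self.faulted = False

    def process_reading(self, dose):
        """Accumulate dose; latch fault on threshold excursion."""
        self.integrated_dose += dose
        if dose > self.fault_threshold:
            # In the real system the MPS would halt or rate-limit the beam.
            self.faulted = True
        return self.faulted

blm = BLMChannel(fault_threshold=5.0)
for reading in [0.2, 0.3, 6.1, 0.1]:
    blm.process_reading(reading)
```

After this sequence the channel has latched a fault (the 6.1 reading) while still tracking the running integrated dose for demagnetization bookkeeping.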

  7. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P; Nicolas, L

    2011-01-01

    The high radiation doses that will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2, the displacement damage in silicon in terms of 1-MeV(Si) equivalent neutron fluence, and the fluence of thermal neutrons at several locations in the ATLAS detector. In this paper, the design of the system, the results of measurements, and a comparison of the measured integrated doses and fluences with predictions from FLUKA simulation are shown.

  8. The ATLAS Level-1 Central Trigger System in operation

    Science.gov (United States)

    Pauly, Thilo; ATLAS Collaboration

    2010-04-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking. It receives the 40 MHz bunch clock from the LHC machine and distributes it to all sub-detectors. It initiates the detector read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors, plus a variety of additional trigger inputs from detectors in the forward regions. The L1CT also provides trigger-summary information to the data acquisition and the Level-2 trigger systems for use in higher levels of the selection process, in offline analysis, and for monitoring. In this paper we give an overview of the operational framework of the L1CT, with particular emphasis on cross-system aspects. The software framework allows a consistent configuration with respect to the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are monitored coherently at all stages of processing and are logged by the online computing system for physics analysis, data-quality assurance and operational debugging. In addition, the synchronisation of trigger inputs is monitored using bunch-by-bunch trigger information. Several software tools allow the relevant information to be displayed efficiently in the control room, in a way useful for shifters and experts. We present the overall performance during cosmic-ray data taking with the full ATLAS detector, and the experience with first beam in the LHC.

  9. The ATLAS Level-1 Central Trigger System in operation

    International Nuclear Information System (INIS)

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking. It receives the 40 MHz bunch clock from the LHC machine and distributes it to all sub-detectors. It initiates the detector read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors, plus a variety of additional trigger inputs from detectors in the forward regions. The L1CT also provides trigger-summary information to the data acquisition and the Level-2 trigger systems for use in higher levels of the selection process, in offline analysis, and for monitoring. In this paper we give an overview of the operational framework of the L1CT, with particular emphasis on cross-system aspects. The software framework allows a consistent configuration with respect to the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are monitored coherently at all stages of processing and are logged by the online computing system for physics analysis, data-quality assurance and operational debugging. In addition, the synchronisation of trigger inputs is monitored using bunch-by-bunch trigger information. Several software tools allow the relevant information to be displayed efficiently in the control room, in a way useful for shifters and experts. We present the overall performance during cosmic-ray data taking with the full ATLAS detector, and the experience with first beam in the LHC.

  10. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation at the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing the ATLAS experiment adopted the data-transformation approach, in which software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data-transformation workflows composed of many tasks provided a scalable production-system framework for template definitions of the many-tasks workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required using modern computing technologies and approaches. We report technical details of this development: the database implementation, server logic, and Web user-interface technologies.

  11. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    Verlaat, Bartholomeus; The ATLAS collaboration

    2016-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained from a reduced-diameter beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower-temperature operation (<-35⁰C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose, up to 550 fb^-1 of integrated luminosity. This paper describes the design, development, construction and commissioning of the IBL CO2 cooling system. It describes the challenges overcome and the important lessons learned for the development of future systems, which are now under design for the Phase-II upgrade detectors.

  12. Simulation of the upgraded Phase-1 Trigger Readout Electronics of the Liquid-Argon Calorimeter of the ATLAS Detector at the LHC

    OpenAIRE

    Grohs, Johannes Philipp

    2016-01-01

    In the context of an intensive upgrade plan for the Large Hadron Collider (LHC) in order to provide proton beams of increased luminosity, a revision of the data readout electronics of the Liquid-Argon-Calorimeter of the ATLAS detector is scheduled. This is required to retain the efficiency of the trigger at increased event rates despite its fixed bandwidth. The focus lies on the early digitization and finer segmentation of the data provided to the trigger. Furthermore, there is the possibilit...

  13. The ATLAS Detector Safety System

    CERN Multimedia

    Helfried Burckhart; Kathy Pommes; Heidi Sandaker

    The ATLAS Detector Safety System (DSS) has the mandate to put the detector in a safe state in case an abnormal situation arises which could be potentially dangerous for the detector. It covers the CERN alarm severity levels 1 and 2, which address serious risks for the equipment. The highest level 3, which also includes danger for persons, is the responsibility of the CERN-wide system CSAM, which always triggers an intervention by the CERN fire brigade. DSS works independently from, and hence complements, the Detector Control System, which is the tool to operate the experiment. The DSS is organized in a Front-End (FE), which fulfills the safety functions autonomously, and a Back-End (BE) for interaction and configuration. The overall layout is shown in the picture below (caption: ATLAS DSS configuration). The FE implementation is based on a redundant Programmable Logic Controller (PLC) system of the kind also used in industry for such safety applications. Each of the two PLCs alone, one located underground and one at the s...

  14. Comparisons of the MINOS near and far detector readout systems at a test beam

    OpenAIRE

    Cabrera, A; Adamson, P.; Barker, M.; Belias, A.; Boyd, S.; Crone, G.; Drake, G.; Falk, E; Harris, P. G.; Hartnell, J.; Jenner, L.; Kordosky, M.; Lang, K.; Litchfield, R. P.; Michael, D.

    2009-01-01

    MINOS is a long baseline neutrino oscillation experiment that uses two detectors separated by 734 km. The readout systems used for the two detectors are different and have to be independently calibrated. To verify and make a direct comparison of the calibrated response of the two readout systems, test beam data were acquired using a smaller calibration detector. This detector was simultaneously instrumented with both readout systems and exposed to the CERN PS T7 test beam. Differe...

  15. Prototype ATLAS IBL modules using the FE-I4A front-end readout chip

    Czech Academy of Sciences Publication Activity Database

    Albert, J.; Alex, M.; Alimonti, G.; Hejtmánek, Martin; Janoška, Zdenko; Korchak, Oleksandr; Popule, Jiří; Šícho, Petr; Sloboda, Michal; Tomášek, Michal; Vrba, Václav

    2012-01-01

    Vol. 7, Nov (2012), 1-45. ISSN 1748-0221 R&D Projects: GA MŠk LA08032 Institutional research plan: CEZ:AV0Z10100502 Keywords: ATLAS * upgrade * tracker * silicon * FE-I4 * planar sensors * test beam Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.869, year: 2011 http://arxiv.org/abs/arXiv:1209.1906

  16. A New Readout Control System for the LHCb Upgrade at CERN

    CERN Document Server

    Alessio, Federico

    2012-01-01

    The LHCb experiment at CERN has proposed an upgrade towards a full 40 MHz readout system in order to run at between five and ten times its initial design luminosity. The various sub-systems in the readout architecture will need to be upgraded in order to cope with higher sub-detector occupancies, higher rates and a higher readout load. In this paper, we describe the new architecture, the new functionalities and the first hardware implementation of the new LHCb Readout Control system (S-TFC) for the upgraded LHCb experiment, together with first results on the validation of the system.

  17. Trigger and Readout System for the Ashra-1 Detector

    Science.gov (United States)

    Aita, Y.; Aoki, T.; Asaoka, Y.; Morimoto, Y.; Motz, H. M.; Sasaki, M.; Abiko, C.; Kanokohata, C.; Ogawa, S.; Shibuya, H.; Takada, T.; Kimura, T.; Learned, J. G.; Matsuno, S.; Kuze, S.; Binder, P. M.; Goldman, J.; Sugiyama, N.; Watanabe, Y.

    A highly sophisticated trigger and readout system has been developed for the All-sky Survey High Resolution Air-shower (Ashra) detector. The Ashra-1 detector has a 42-degree-diameter field of view. Detecting Cherenkov and fluorescence light against the large background in such a wide field of view requires a finely segmented, high-speed trigger and readout system. The system is composed of an optical-fiber image transmission system, a 64 × 64 channel trigger sensor and an FPGA-based trigger logic processor. The system typically processes the image within 10 to 30 ns and opens the shutter on the fine CMOS sensor. The 64 × 64 coarse split image is transferred via a 64 × 64 precisely aligned optical fiber bundle to a photon sensor. Current signals from the photon sensor are discriminated by custom-made trigger amplifiers. The FPGA-based processor processes the 64 × 64 hit pattern, and the corresponding partial area of the fine image is acquired. A commissioning Earth-skimming tau-neutrino observational search was carried out with this trigger system. In addition to the geometrical advantage of the Ashra observational site, the excellent tau-shower axis measurement based on the fine imaging and the night-sky background rejection based on the fine and fast imaging allow a zero-background tau-shower search. Adoption of the optical fiber bundle and trigger LSI realized the 4k-channel trigger system cheaply. The detectability of tau showers is also confirmed by simultaneously observed Cherenkov air showers. Reduction of the trigger threshold appears to enhance the effective area, especially in the PeV tau-neutrino energy region. A new two-dimensional trigger LSI was introduced and the trigger threshold was lowered. A new calibration system for the trigger system was recently developed and introduced to the Ashra detector
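
    The FPGA's job of scanning a 64 × 64 hit pattern and selecting the corresponding partial image region can be sketched in a few lines. The block size and hit threshold here are illustrative assumptions, not the actual Ashra trigger logic.

```python
# Sketch of a coarse hit-pattern trigger: fire when any 2x2 block of a
# 64x64 discriminated-hit grid contains at least `min_hits` hits, and
# return the block position so the matching fine-image region can be read.
def pattern_trigger(hits, min_hits=3, n=64):
    for r in range(n - 1):
        for c in range(n - 1):
            count = (hits[r][c] + hits[r][c + 1]
                     + hits[r + 1][c] + hits[r + 1][c + 1])
            if count >= min_hits:
                return (r, c)   # top-left corner of the triggering block
    return None  # no trigger

grid = [[0] * 64 for _ in range(64)]
for r, c in [(10, 20), (10, 21), (11, 20)]:   # assumed hit cluster
    grid[r][c] = 1
roi = pattern_trigger(grid)
```

An FPGA evaluates all such blocks in parallel rather than in a Python loop, which is how the nanosecond-scale decision time is achieved.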

  18. Readout system of TPC/MPD NICA project

    Energy Technology Data Exchange (ETDEWEB)

    Averyanov, A. V.; Bajajin, A. G.; Chepurnov, V. F.; Cheremukhina, G. A.; Fateev, O. V.; Korotkova, A. M.; Levchanovskiy, F. V.; Lukstins, J.; Movchan, S. A.; Razin, S. V.; Rybakov, A. A.; Vereschagin, S. V., E-mail: vereschagin@jinr.ru; Zanevsky, Yu. V.; Zaporozhets, S. A.; Zruyev, V. N. [Joint Institute for Nuclear Research (Russian Federation)

    2015-12-15

    The time-projection chamber (TPC) is the main tracking detector in MPD/NICA. The information on charged-particle tracks in the TPC is registered by the MWPC with cathode-pad readout. The front-end electronics (FEE) are developed using modern technologies such as application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), and data transfer to a concentrator via a fast optical interface. The main parameters of the FEE are as follows: total number of channels, ∼95 000; data stream from the whole TPC, 5 GB/s; low power consumption, less than 100 mW/ch; signal-to-noise ratio (S/N), 30; equivalent noise charge (ENC), <1000 e⁻ (C_in = 10–20 pF); and zero suppression (pad-signal rejection ∼90%). The article presents the status of the readout-chamber construction and the data acquisition system. The results of testing FEE prototypes are presented.
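
    The zero suppression quoted above (∼90% pad-signal rejection) amounts to discarding samples below a noise threshold before transmission, keeping only significant samples with their positions. The threshold and sample values in this sketch are illustrative assumptions.

```python
# Sketch of pad-level zero suppression: keep (index, value) pairs above
# a noise threshold, drop everything else to shrink the data stream.
def zero_suppress(samples, threshold):
    return [(i, s) for i, s in enumerate(samples) if s > threshold]

pad_samples = [2, 1, 0, 48, 52, 7, 1, 0, 0, 3]   # assumed ADC samples
kept = zero_suppress(pad_samples, threshold=5)
rejection = 1 - len(kept) / len(pad_samples)
```

The retained indices let the downstream DAQ reconstruct where on the pad row each surviving sample came from.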

  19. Readout system of the ALICE Muon tracking detector

    International Nuclear Information System (INIS)

    A Large Ion Collider Experiment (ALICE) is aimed at studying heavy-ion collisions at the extreme energy densities accessible at CERN's Large Hadron Collider (LHC), where the formation of the Quark-Gluon Plasma is expected. The ALICE muon forward spectrometer will identify muons with momentum above 4 GeV/c, allowing the study of quarkonia and heavy flavors in the pseudorapidity range -4.0<η<-2.5 with 2π azimuthal coverage. The muon tracking system consists of 10 Cathode Pad Chambers (CPC) with 1.1 million pads, which represents the total number of acquisition channels to manage. In this article, we give an overview of the ALICE Muon Spectrometer. We then focus on the tracking-system front-end electronics (FEE) and the readout system. We show that the Digital Signal Processor (DSP) architecture fulfills all the requirements, including radiation hardness against neutrons. Finally, real-time performance is discussed.

  20. The ATLAS Data Flow System for Run 2

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...

  1. The ATLAS Data Flow System for LHC Run II

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...

  2. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m² that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019 has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data, after generating valid event fragments, to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...
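    The ROD described above must wrap each triggered chunk of detector data into a valid event fragment, tagged with the Level-1 ID and bunch-crossing ID delivered over the TTC link, before passing it to the ROS. A minimal Python sketch of that tagging step (the field layout and names are illustrative assumptions, not the real ATLAS fragment format):

```python
import struct

def build_fragment(l1id, bcid, payload):
    # Pack a minimal event fragment: a header carrying the Level-1 ID,
    # the 12-bit bunch-crossing ID and the payload length, followed by
    # the raw detector data. Layout is hypothetical, for illustration.
    header = struct.pack("<III", l1id, bcid & 0xFFF, len(payload))
    return header + payload

def parse_fragment(frag):
    # Recover (l1id, bcid, payload) from a fragment built above.
    l1id, bcid, n = struct.unpack_from("<III", frag, 0)
    return l1id, bcid, frag[12:12 + n]

frag = build_fragment(l1id=42, bcid=3001, payload=b"\x01\x02\x03")
assert parse_fragment(frag) == (42, 3001, b"\x01\x02\x03")
```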

  3. Diagnostic analysis of silicon strips detector readout in the ATLAS Semi-Conductor Tracker module production

    International Nuclear Information System (INIS)

    The ATLAS Semi-Conductor Tracker (SCT) Collaboration is currently in the production phase of fabricating and testing silicon strip modules for the ATLAS detector at the Large Hadron Collider being built at the CERN laboratory in Geneva, Switzerland. A small but significant percentage of ICs developed, after being mounted on hybrids, a new set of defects that had not been detected in the wafer screening. To minimize IC replacement and outright module failure, analysis methods were developed to study IC problems during the production of SCT modules. These analyses included studying wafer and hybrid data correlations to finely tune the selection of ICs, and tests exploiting the ability to adjust front-end parameters of the IC in order to reduce the rejection and replacement rate of fabricated components. This paper will discuss a few examples of the problems encountered during the production of SCT hybrids and modules in the area of IC performance, and will demonstrate the value of the flexibility built into the ABCD3T chip

  4. Diagnostic analysis of silicon strips detector readout in the ATLAS Semi-Conductor Tracker module production

    CERN Document Server

    Ciocio, Alessandra

    2005-01-01

    The ATLAS Semi-Conductor Tracker (SCT) Collaboration is currently in the production phase of fabricating and testing silicon strip modules for the ATLAS detector at the Large Hadron Collider being built at the CERN laboratory in Geneva, Switzerland. A small but significant percentage of ICs developed, after being mounted on hybrids, a new set of defects that had not been detected in the wafer screening. To minimize IC replacement and outright module failure, analysis methods were developed to study IC problems during the production of SCT modules. These analyses included studying wafer and hybrid data correlations to finely tune the selection of ICs, and tests exploiting the ability to adjust front-end parameters of the IC in order to reduce the rejection and replacement rate of fabricated components. This paper will discuss a few examples of the problems encountered during the production of SCT hybrids and modules in the area of IC performance, and will demonstrate the value of the flexibility built into the ABCD3T ...

  5. Flexible readout and integration sensor (FRIS): a bio-inspired, system-on-chip, event-based readout architecture

    Science.gov (United States)

    Lin, Joseph H.; Pouliquen, Philippe O.; Andreou, Andreas G.; Goldberg, Arnold C.; Rizk, Charbel G.

    2012-06-01

    We present a bio-inspired system-on-chip focal plane readout architecture which, at the system level, relies on an event-based sampling scheme where only pixels within a programmable range of photon flux rates are output. At the pixel level, a one-bit oversampled analog-to-digital converter together with a decimator allows for the quantization of signals up to 26 bits. Furthermore, digital non-uniformity correction of both gain and offset errors is applied at the pixel level prior to readout. We report test results for a prototype array fabricated in a standard 90 nm CMOS process. Tests performed at room and cryogenic temperatures demonstrate the capability to operate at a temporal noise ratio as low as 1.5, an electron well capacity over 100 Ge⁻, and an ADC LSB down to 1 e⁻.
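    The event-based scheme above outputs only those pixels whose photon-flux estimate falls inside a programmable window. A toy Python sketch of that selection step (the frame layout and window semantics are assumptions for illustration):

```python
def event_readout(frame, lo, hi):
    # Return (row, col, value) events only for pixels whose flux
    # estimate lies inside the programmable [lo, hi] window; all
    # other pixels produce no readout event.
    return [(r, c, v)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if lo <= v <= hi]

frame = [[5, 120],
         [260, 90]]
# Only the two mid-range pixels generate events:
assert event_readout(frame, 80, 200) == [(0, 1, 120), (1, 1, 90)]
```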

  6. The design of a DAQ system for a GEM imaging detector based on FET array readout

    International Nuclear Information System (INIS)

    A data acquisition system was designed for a GEM imaging detector, which is read out by a FET switch array and can be used in real-time imaging. By using advanced technologies such as FPGA and MCU, the DAQ system achieves multi-channel real-time readout with high accuracy and broad applicability. (authors)

  7. The ATLAS ROBIN

    Energy Technology Data Exchange (ETDEWEB)

    Cranfield, R; Crone, G [University College London, London (United Kingdom); Francis, D; Gorini, B; Joos, M; Petersen, J; Tremblet, L; Unel, G [CERN, Geneva (Switzerland); Green, B; Misiejuk, A; Strong, J; Teixeira-Dias, P [Royal Holloway University of London, London (United Kingdom); Kieft, G; Vermeulen, J [FOM - Institute SAF and University of Amsterdam/Nikhef, Amsterdam (Netherlands); Kugel, A; Mueller, M; Yu, M [University of Mannheim, Mannheim (Germany); Perera, V; Wickens, F [Rutherford Appleton Laboratory, Didcot (United Kingdom)], E-mail: kugel@ti.uni-mannheim.de

    2008-01-15

    The ATLAS readout subsystem is the main interface between ~1600 detector front-end readout links and the higher-level trigger farms. To handle the high event rate (up to 100 kHz) and bandwidth (up to 160 MB/s per link) the readout PCs are equipped with four ROBIN (readout buffer input) cards. Each ROBIN attaches to three optical links, provides local event buffering for approximately 300 ms and communicates with the higher-level trigger system for data and delete requests. According to the ATLAS baseline architecture this communication runs via the PCI bus of the host PC. In addition, each ROBIN provides a private Gigabit Ethernet port which can be used for the same purpose. Operational monitoring is performed via PCI. This paper presents a summary of the ROBIN hardware and software together with measurement results obtained from various test setups.
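    Functionally, a ROBIN buffers event fragments keyed by Level-1 ID until the higher-level trigger either requests or deletes them. A toy Python model of that request/delete protocol (interface names are illustrative, not the real ROBIN API):

```python
class Robin:
    # Toy model of a readout buffer: store fragments by Level-1 ID,
    # serve data requests from the higher-level trigger, and free
    # memory on delete requests.
    def __init__(self):
        self.buf = {}

    def receive(self, l1id, fragment):
        self.buf[l1id] = fragment

    def data_request(self, l1id):
        # Return the buffered fragment, or None if already deleted.
        return self.buf.get(l1id)

    def delete_request(self, l1ids):
        # The trigger sends deletes in batches once events are rejected
        # or safely built downstream.
        for i in l1ids:
            self.buf.pop(i, None)

rob = Robin()
rob.receive(7, b"evt7")
assert rob.data_request(7) == b"evt7"
rob.delete_request([7])
assert rob.data_request(7) is None
```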

  8. Modified read-out system of the beam phase measurement system for CSNS

    International Nuclear Information System (INIS)

    The customized beam phase measurement system can meet the requirement of beam loss control of the radio-frequency quadrupole (RFQ). However, its read-out part cannot satisfy the requirement of China Spallation Neutron Source (CSNS). CSNS uses the Experimental Physics and Industrial Control System (EPICS) as its control system. So it is necessary to develop the EPICS read-out system consisting of EPICS IOC databases, driver support and OPIs. The new system has been successfully tested in the RFQ. In the future, it will be applied to the beam diagnostics of CSNS. (authors)

  9. The ATLAS Level-1 Central Trigger System

    CERN Document Server

    Borrego-Amaral, P; Farthouat, Philippe; Gällnö, P; Haller, J; Maeno, T; Pauly, T; Schuler, G; Spiwoks, R; Torga-Teixeira, R; Wengler, T; Pessoa-Lima, H; De Seixas, J M

    2004-01-01

    The central part of the ATLAS Level-1 trigger system consists of the Central Trigger Processor (CTP), the Local Trigger Processors (LTPs), the Timing, Trigger and Control (TTC) system, and the Read-out Driver Busy (ROD_BUSY) modules. The CTP combines information from calorimeter and muon trigger processors, as well as from other sources and makes the final Level-1 Accept decision (L1A) on the basis of lists of selection criteria, implemented as a trigger menu. Timing and trigger signals are fanned out to about 40 LTPs which inject them into the sub-detector TTC partitions. The LTPs also support stand-alone running and can generate all necessary signals from memory. The TTC partitions fan out the timing and trigger signals to the sub-detector front-end electronics. The ROD_BUSY modules receive busy signals from the front-end electronics and send them to the CTP (via an LTP) to throttle the generation of L1As. An overview of the ATLAS Level-1 Central trigger system will be presented, with emphasis on the design...
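    The L1A decision described above is essentially a logical OR over menu items, each combining a selection condition with a prescale. A simplified Python illustration (the prescale semantics shown are an assumption; the real CTP implements the menu in hardware):

```python
class TriggerItem:
    # One menu item: a selection predicate plus a prescale N
    # (pass only every Nth event that satisfies the condition).
    def __init__(self, condition, prescale=1):
        self.condition = condition
        self.prescale = prescale
        self.count = 0

    def fires(self, event):
        if not self.condition(event):
            return False
        self.count += 1
        return self.count % self.prescale == 0

def level1_accept(menu, event):
    # L1A is the OR of all menu items. Evaluate eagerly (not with a
    # short-circuiting any over a generator) so every prescale counter
    # advances, mimicking the parallel hardware evaluation.
    results = [item.fires(event) for item in menu]
    return any(results)

menu = [TriggerItem(lambda e: e["em_et"] > 20),
        TriggerItem(lambda e: e["mu_pt"] > 6, prescale=2)]
assert level1_accept(menu, {"em_et": 25, "mu_pt": 0}) is True
assert level1_accept(menu, {"em_et": 5, "mu_pt": 0}) is False
```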

  10. Upgrade of the Laser Calibration System for the ATLAS Hadronic Calorimeter TileCal

    CERN Document Server

    Van Woerden, Marius Cornelis; The ATLAS collaboration

    2015-01-01

    We present in this contribution the new system for laser calibration of the ATLAS hadronic calorimeter TileCal. The laser system is a part of the three stage calibration apparatus designed to compute the calibration constants of the individual cells of TileCal. The laser system is mainly used to correct for short term (one month) drifts of the readout of the individual cells. A sub-percent accuracy in the control of the calibration constants is required to keep the systematic effects introduced by relative cell miscalibration below the irreducible systematics in determining the parameters of the reconstructed hadronic jets. To achieve this goal in the LHC run II conditions, a new laser system was designed. The architecture of the system is described with details on the new optical line used to distribute laser pulses in each individual detector module and on the new electronics used to drive the laser, to read out the system optical monitors and to interface the system with the ATLAS readout, trigger, and slo...

  11. Upgrade of the Laser Calibration System for the ATLAS Hadronic Calorimeter TileCal

    CERN Document Server

    Van Woerden, Marius Cornelis; The ATLAS collaboration

    2015-01-01

    We present in this contribution the new system for laser calibration of the ATLAS hadronic calorimeter TileCal. The laser system is a part of the three stage calibration apparatus designed to compute the calibration constants of the individual cells of TileCal. The laser system is mainly used to correct for short term (one month) drifts of the readout of the individual cells. A sub-percent accuracy in the control of the calibration constants is required to keep the systematic effects introduced by relative cell miscalibration below the irreducible systematics in determining the parameters of the reconstructed hadronic jets. To achieve this goal in the LHC Run 2 conditions, a new laser system was designed. The architecture of the system is described with details on the new optical line used to distribute laser pulses in each individual detector module and on the new electronics used to drive the laser, to read out the system optical monitors and to interface the system with the ATLAS readout, trigger, and slow...
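    The principle behind the laser runs is simple: if a cell's response to a fixed laser pulse drifts, its readout is rescaled by the ratio of the reference response to the current one. A schematic Python sketch (cell names and the bare ratio are illustrative; the real procedure includes corrections for the optical line and monitoring photodiodes):

```python
def drift_correction(ref_response, laser_response):
    # Per-cell multiplicative correction from laser runs: a channel
    # whose response to a fixed laser pulse has dropped gets its
    # readout scaled up by the inverse ratio, and vice versa.
    return {cell: ref_response[cell] / laser_response[cell]
            for cell in ref_response}

ref = {"A1": 100.0, "A2": 100.0}   # response at reference calibration
now = {"A1": 98.0, "A2": 101.0}    # response in the latest laser run
corr = drift_correction(ref, now)
assert abs(corr["A1"] - 100.0 / 98.0) < 1e-12   # 2% downward drift
assert corr["A2"] < 1.0                          # 1% upward drift
```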

  12. Application of the ATLAS DAQ and Monitoring System for MDT and RPC Commissioning

    CERN Document Server

    Pasqualucci, E

    2007-01-01

    The ATLAS DAQ and monitoring software are currently commonly used to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS Readout application. Based on the plug-in mechanism, it provides a complete environment to interface any kind of detector or trigger electronics to the ATLAS DAQ system. All the possible flavours of this application are used to test and run the MDT and RPC detectors at the pre-commissioning and commissioning sites. Ad-hoc plug-ins have been developed to implement data readout via VME, both with ROD prototypes and emulating final electronics to read out data with temporary solutions, and to provide trigger distribution and busy management in a multi-crate environment. Data driven event building functionality is also used to combine data f...

  13. The readout system for the ArTeMis camera

    Science.gov (United States)

    Doumayrou, E.; Lortholary, M.; Dumaye, L.; Hamon, G.

    2014-07-01

    During ArTeMiS observations at the APEX telescope (Chajnantor, Chile), 5760 bolometric pixels from 20 arrays at 300 mK, corresponding to 3 submillimeter focal planes at 450μm, 350μm and 200μm, have to be read out simultaneously at 40 Hz. The readout system, made of electronics and software, is the full chain from the cryostat to the telescope. The readout electronics consists of cryogenic buffers at 4 K (NABU), based on CMOS technology, and of warm electronic acquisition systems called BOLERO. The bolometric signal given by each pixel has to be amplified, sampled, converted, time stamped and formatted into data packets by the BOLERO electronics. The time stamping is obtained by decoding an IRIG-B signal provided by APEX and is key to ensuring the synchronization of the data with the telescope. Specifically developed for ArTeMiS, BOLERO is an assembly of analogue and digital FPGA boards connected directly on top of the cryostat. Two detector arrays (18*16 pixels), one NABU and one BOLERO interconnected by ribbon cables constitute the unit of the electronic architecture of ArTeMiS. In total, the 20 detectors for the three focal planes are read by 10 BOLEROs. The software runs on a Linux operating system on 2 back-end computers (called BEAR), which are small and robust PCs with solid-state disks. They gather the 10 BOLERO data fluxes and reconstruct the focal-plane images. When the telescope scans the sky, the acquisitions are triggered thanks to a specific network protocol. This interface with APEX makes it possible to synchronize the acquisition with the observations on sky: the time-stamped data packets are sent during the scans to the APEX software that builds the observation FITS files. A graphical user interface enables the setting of the camera and the real-time display of the focal-plane images, which is essential in laboratory and commissioning phases. The software is a set of C++, Labview and Python, the qualities of which are respectively used
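    The numbers quoted above imply a modest aggregate raw data rate, which the two BEAR computers can comfortably absorb. A back-of-the-envelope check in Python (2 bytes per sample is an assumption; the abstract does not state the sample width):

```python
def raw_data_rate(pixels=5760, frame_hz=40, bytes_per_sample=2):
    # Aggregate raw data rate in bytes/s: every bolometric pixel is
    # sampled once per frame, before any packet overhead.
    return pixels * frame_hz * bytes_per_sample

# 5760 pixels * 40 Hz * 2 B/sample:
assert raw_data_rate() == 460_800   # ~0.46 MB/s for the whole camera
```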

  14. The ATLAS Level-1 Central Trigger System in Operation

    CERN Document Server

    Pauly, T

    2010-01-01

    The ATLAS Level-1 Central Trigger (L1CT) electronics is a central part of ATLAS data-taking. It receives the 40 MHz bunch clock from the LHC machine and distributes it to all sub-detectors. It initiates the detector read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors, plus a variety of additional trigger inputs from detectors in the forward regions. The L1CT also provides trigger-summary information to the data acquisition and the Level-2 trigger systems for use in higher levels of the selection process, in offline analysis, and for monitoring. In this paper we give an overview of the operational framework of the L1CT with particular emphasis on cross-system aspects. The software framework allows a consistent configuration with respect to the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are monitored coherently on all stages of processing and are logged by the online c...

  15. LHCb : Clock and timing distribution in the LHCb upgraded detector and readout system

    CERN Multimedia

    Alessio, Federico; Barros Marin, M; Cachemiche, JP; Hachon, F; Jacobsson, Richard; Wyllie, Ken

    2014-01-01

    The LHCb experiment is upgrading part of its detector and the entire readout system towards a full 40 MHz readout system in order to run at between five and ten times its initial design luminosity and increase its trigger efficiency. In this paper, the new timing, trigger and control distribution system for this upgrade is reviewed, with particular attention given to the distribution of the clock and timing information across the entire readout system, up to the FE and the on-detector electronics. Current ideas are presented here in terms of reliability, jitter, complexity and implementation.

  16. The Relational Database Aspects of Argonne's ATLAS Control System

    OpenAIRE

    Quock, D. E. R.; Munson, F. H.; Eder, K. J.; Dean, S. L.

    2001-01-01

    Argonne's ATLAS (Argonne Tandem Linac Accelerator System) control system comprises two separate database concepts. The first is the distributed real-time database structure provided by the commercial product Vsystem [1]. The second is a more static relational database archiving system designed by ATLAS personnel using Oracle Rdb [2] and Paradox [3] software. The configuration of the ATLAS facility has presented a unique opportuni...

  17. Cross-compilation of ATLAS online software to the PowerPC-VxWorks system

    International Nuclear Information System (INIS)

    BES III selected the ATLAS online software as the framework of its run-control system. BES III uses the PowerPC-VxWorks system in its front-end readout system, so it is necessary to cross-compile this software to the PowerPC-VxWorks system. The article covers several aspects of this project, such as the structure and organization of the ATLAS online software, the application of the CMT tool while cross-compiling, the selection and configuration of the cross-compiler, and methods to solve various problems arising from the differences in compiler and operating system. After cross-compilation the software runs normally and, together with the software running on the Linux system, makes up a complete run-control system. (authors)

  18. Evolution of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Pozo Astigarraga, M E; The ATLAS collaboration

    2014-01-01

    ATLAS is a Physics experiment that explores high-energy particle collisions at the Large Hadron Collider at CERN. It uses tens of millions of electronics channels to capture the outcome of the particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (~100 TB/s), ATLAS makes use of a complex and highly distributed Trigger and Data Acquisition (TDAQ) system, in charge of selecting only interesting data and transporting those to permanent mass storage (~1 GB/s) for later analysis. The data reduction is carried out in two stages: first, custom electronics performs an initial level of data rejection for each bunch crossing based on partial and localized information. Only data corresponding to collisions passing this stage of selection will be actually read-out from the on-detector electronics. Then, a large computer farm (~17 k cores) analyses these data in real-time and decides which ones are worth being stored for Physics analysis. A large network a...

  19. Evolution of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Pozo Astigarraga, M E; The ATLAS collaboration

    2015-01-01

    ATLAS is a Physics experiment that explores high-energy particle collisions at the Large Hadron Collider at CERN. It uses tens of millions of electronics channels to capture the outcome of the particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (~100 TB/s), ATLAS makes use of a complex and highly distributed Trigger and Data Acquisition (TDAQ) system, in charge of selecting only interesting data and transporting those to permanent mass storage (~1 GB/s) for later analysis. The data reduction is carried out in two stages: first, custom electronics performs an initial level of data rejection for each bunch crossing based on partial and localized information. Only data corresponding to collisions passing this stage of selection will be actually read-out from the on-detector electronics. Then, a large computer farm (~17 k cores) analyses these data in real-time and decides which ones are worth being stored for Physics analysis. A large network a...
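    The figures quoted above imply an overall data reduction of five orders of magnitude between the detector output and mass storage, shared between the custom first-level electronics and the HLT farm. A quick Python check of that factor:

```python
def rejection_factor(input_rate, output_rate):
    # Overall data-reduction factor across the trigger chain,
    # e.g. bytes/s in versus bytes/s out.
    return input_rate / output_rate

# ~100 TB/s off the detector versus ~1 GB/s to permanent storage:
assert rejection_factor(100e12, 1e9) == 1e5   # only 1 in 100000 survives
```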

  20. Development of the Photomultiplier-Tube Readout System for the CTA Large Size Telescope

    OpenAIRE

    Kubo, H; Paoletti, R.; Awane, Y; Bamba, A.; Barcelo, M.; Barrio, J.A.; Blanch, O.; Boix, J; Delgado, C; Fink, D.; Gascon, D.; Gunji, S; Hagiwara, R.; Hanabata, Y.; K. Hatanaka

    2013-01-01

    We have developed a prototype of the photomultiplier tube (PMT) readout system for the Cherenkov Telescope Array (CTA) Large Size Telescope (LST). Two thousand PMTs along with their readout systems are arranged on the focal plane of each telescope, with one readout system per 7-PMT cluster. The Cherenkov light pulses generated by the air showers are detected by the PMTs and amplified in a compact, low noise and wide dynamic range gain block. The output of this block is then digitized at a sam...

  1. A study of performance issues of the ATLAS event selection system based on an ATM switching network

    International Nuclear Information System (INIS)

    The next generation of High Energy Physics experiments, ATLAS and CMS, proposed at the CERN Large Hadron Collider (LHC), will place heavy demands on the data acquisition and on-line filtering systems. Asynchronous Transfer Mode (ATM) is a candidate technology to implement the high performance network in the data collection system for the ATLAS experiment. This work presents the results of modeling and simulation studies which aim at integrating the detailed organization of the detector read-out, the trigger requirements and the capabilities of ATM switching networks. The status of hardware development of small scale demonstrators is outlined

  2. An Optical Readout System for the LHCb Silicon Tracker

    CERN Document Server

    Vollhardt, A

    2005-01-01

    The LHCb experiment is designed to precisely measure the CP-violation parameters in B-meson decays. It is one of the four large experiments currently being installed at the Large Hadron Collider (LHC), planned to start taking data in 2007. The Silicon Tracker covers the tracking volume around the beam pipe, where the track densities are highest. The huge amount of data generated during operation has to be transported to the processor farm for subsequent track recognition and analysis. This work presents a digital optical readout link which transmits the tracking information at a rate of 2.7 Terabit/s over a distance of 100 m from the detector to the computer farms. Special attention was paid to the radiation tolerance of the transmitting section, as its location is exposed to ionizing particles and radiation levels similar to those encountered in space applications. The design aims to provide a modular system which simplifies production, testing, commissioning and maintenance. Where possible, commercial...

  3. X-ray and gamma ray detector readout system

    Science.gov (United States)

    Tumer, Tumay O; Clajus, Martin; Visser, Gerard

    2010-10-19

    A readout electronics scheme is under development for high resolution, compact PET (positron emission tomography) imagers based on LSO (lutetium ortho-oxysilicate, Lu.sub.2SiO.sub.5) scintillator and avalanche photodiode (APD) arrays. The key is to obtain sufficient timing and energy resolution at a low power level, less than about 30 mW per channel, including all required functions. To this end, a simple leading edge level crossing discriminator is used, in combination with a transimpedance preamplifier. The APD used has a gain of order 1,000, and an output noise current of several pA/√Hz, allowing bipolar technology to be used instead of CMOS, for increased speed and power efficiency. A prototype of the preamplifier and discriminator has been constructed, achieving timing resolution of 1.5 ns FWHM, 2.7 ns full width at one tenth maximum, relative to an LSO/PMT detector, and an energy resolution of 13.6% FWHM at 511 keV, while operating at a power level of 22 mW per channel. Work is in progress towards integration of this preamplifier and discriminator with appropriate coincidence logic and amplitude measurement circuits in an ASIC suitable for a high resolution compact PET instrument. The detector system and/or ASIC can also be used for many other applications for medical to industrial imaging.
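    The timing pick-off described here is a leading-edge level-crossing discriminator: the output fires at the first instant the pulse crosses a fixed threshold. Its digital counterpart is straightforward; a Python sketch (sample values are illustrative):

```python
def leading_edge(samples, threshold):
    # Leading-edge level-crossing discriminator: return the index of
    # the first sample at or above threshold, or None if the pulse
    # never crosses it. (In the analog circuit this index would be a
    # time, subject to amplitude-dependent walk.)
    for i, s in enumerate(samples):
        if s >= threshold:
            return i
    return None

pulse = [0, 1, 3, 9, 15, 12, 6]
assert leading_edge(pulse, 8) == 3      # crossing on the rising edge
assert leading_edge(pulse, 99) is None  # sub-threshold pulse
```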

  4. Hybrid Control System for the ATLAS Facility

    International Nuclear Information System (INIS)

    A thermal-hydraulic integral effect test (IET) loop, advanced thermal-hydraulic test loop for accident simulation (ATLAS), has been constructed in the Korea Atomic Energy Research Institute (KAERI). For the data acquisition and control system, hybrid control system (HCS) was adopted to enhance the integrated performance of demanding process control application for acquiring of experimental data. The whole feature of the data acquisition and control system consists of 1 set of the HCS for headware connection, 1 server station for signal processing schemes, 1 engineering work station (EWS) for control logics, and 3 operator interface station (OPS) for human-machine interface. The total number of signals for the data acquisition and the system control of the atlas facility is up to about 2010 channels, which are distributed in 16 chasses which are installed in 10 cabinets. The main focus of this paper is to present the technical configuration of the HCS of the atlas facility

  5. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  6. Implementation and Tests of FPGA-embedded PowerPC in the control system of the ATLAS IBL ROD card

    CERN Document Server

    Balbi, G; The ATLAS collaboration; Falchieri, D; Gabrielli, A; Furini, M; Kugel, A; Travaglini, R; Wensing, M

    2012-01-01

    The Insertable B-Layer project is planned for the upgrade of the ATLAS experiment at the LHC. A silicon layer will be inserted into the existing Pixel Detector together with new electronics. The off-detector readout system is implemented with a Back-Of-Crate module providing I/O functionality and a Readout-Driver card (ROD) for data processing. The ROD hosts the electronics devoted to control operations, implemented both with a backward-compatible solution (via DSP) and with a PowerPC embedded into an FPGA. In this document major firmware and software achievements concerning the PowerPC implementation, tested on ROD prototypes, are reported.

  7. READOUT SYSTEM FOR ARRAYS OF FRISCH-RING CdZnTe DETECTORS

    International Nuclear Information System (INIS)

    Frisch-ring CdZnTe detectors have demonstrated good energy resolution for identifying isotopes. An array of Frisch-ring detectors was coupled with a readout electronics system that supports 64 readout channels and includes front-end electronics, a signal-processing circuit, a USB interface and a high-voltage power supply. The data-acquisition software processes the data stream, which includes amplitude and timing information for each detected event. This paper describes the design and assembly of the detector modules, the readout electronics, and a conceptual prototype system. Some test results are also reported

  8. A Triggerless readout system for the P̄ANDA electromagnetic calorimeter

    Science.gov (United States)

    Tiemens, M.; PANDA Collaboration

    2015-02-01

    One of the physics goals of the future P̄ANDA experiment at FAIR is to study newly discovered exotic states. Because the detector response created by these particles is very similar to that of the background channels, a new type of data readout had to be developed, called "triggerless" readout. In this concept, each detector subsystem preprocesses the signal, so that in a later stage high-level physics constraints can be applied to select events of interest. A dedicated clock source using a protocol called SODANET over optical fibres ensures proper synchronisation between the components. For this new type of readout, a new way of simulating the detector response also needed to be developed, taking into account the effects of pile-up caused by the 20 MHz interaction rate.
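    The pile-up mentioned above can be estimated with a simple Poisson model: at a 20 MHz interaction rate, the chance of another interaction landing inside a pulse window is substantial. A Python sketch (the 100 ns window is an illustrative assumption, not the actual P̄ANDA shaping time, and the model ignores the detector simulation details):

```python
import math

def pileup_probability(rate_hz, window_s):
    # Probability that at least one other interaction falls inside a
    # signal window of the given length, assuming Poisson-distributed
    # interactions: P = 1 - exp(-rate * window).
    return 1.0 - math.exp(-rate_hz * window_s)

# 20 MHz interaction rate, 100 ns window (illustrative value):
p = pileup_probability(20e6, 100e-9)
assert abs(p - (1 - math.exp(-2.0))) < 1e-12   # ~86% chance of pile-up
```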

  9. The detector control system for the ATLAS semiconductor tracker assembly phase

    CERN Document Server

    Sfyrla, Anna; Basiladze, Sergei G; Brenner, Richard; Chamizo-Llatas, Maria; Codispoti, Giuseppe; Ferrari, Pamela; Mikulec, Bettina; Phillips, Peter; Sandaker, Heidi; Stanecka, Ewa

    2005-01-01

    The ATLAS Semiconductor Tracker (SCT) consists of 4088 silicon microstrip modules, with a total of 6.3 million readout channels. These are arranged into 4 concentric barrel layers and 2 endcaps of 9 disks each. The coherent and safe operation of the SCT during commissioning and subsequent operation is an essential task of the Detector Control System (DCS). The main building blocks of the SCT DCS, the cooling system, the power supplies and the environmental system, are described. First results from DCS testing are presented.

  10. The PCIe-based readout system for the LHCb experiment

    Science.gov (United States)

    Cachemiche, J. P.; Duval, P. Y.; Hachon, F.; Le Gac, R.; Réthoré, F.

    2016-02-01

    The LHCb experiment is designed to study differences between particles and anti-particles as well as very rare decays in the beauty and charm sector at the LHC. The detector will be upgraded in 2019 in order to significantly increase its efficiency, by removing the first-level hardware trigger. The upgrade experiment will implement a trigger-less readout system in which all the data from every LHC bunch-crossing are transported to the computing farm over 12000 optical links without hardware filtering. The event building and event selection are carried out entirely in the farm. Another original feature of the system is that data transmitted through these fibres arrive directly to computers through a specially designed PCIe card called PCIe40. The same board handles the data acquisition flow and the distribution of fast and slow controls to the detector front-end electronics. It embeds one of the most powerful FPGAs currently available on the market with 1.2 million logic cells. The board has a bandwidth of 480 Gbits/s in both input and output over optical links and 100 Gbits/s over the PCI Express bus to the CPU. We will present how data circulate through the board and in the PC server for achieving the event building. We will focus on specific issues regarding the design of such a board with a very large FPGA, in particular in terms of power supply dimensioning and thermal simulations. The features of the board will be detailed and we will finally present the first performance measurements.
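    A quick sanity check on the scale of the system: with 480 Gbit/s of optical input per PCIe40 at roughly 10 Gbit/s per link, each board terminates about 48 fibres, so on the order of 250 boards cover the 12000 links. In Python (the 48-links-per-board figure is an inference from the quoted bandwidths, not a number stated in the abstract):

```python
def boards_needed(total_links=12000, links_per_board=48):
    # Ceiling division: the minimum number of PCIe40 cards needed to
    # terminate all readout fibres.
    return -(-total_links // links_per_board)

assert boards_needed() == 250          # 12000 / 48 divides exactly
assert boards_needed(49, 48) == 2      # a partial board still counts
```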

  11. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  12. Electronic zooming TV readout system for an x-ray microscope

    International Nuclear Information System (INIS)

    The electronic zooming TV readout system using the X-ray zooming tube has been developed for the purpose of real-time readout of very high resolution X-ray images, e.g. the output image from an X-ray microscope. The limiting resolution of the system is 0.2∼0.3 μm, and it is easy to operate in practical applications.

  13. A microcontroller based read-out system for secondary emission wire monitor

    International Nuclear Information System (INIS)

    A microcontroller based readout system has been developed for the secondary emission wire monitor (SEWM) for the INDUS-2 project, a 2.5 GeV synchrotron radiation source being developed at the Centre for Advanced Technology, Indore. The readout system can handle up to 16 horizontal and 16 vertical input signals from the SEWM. These data are sent to a PC to determine the transverse profile and position of the electron beam. (author)
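    The profile and position computation the PC performs on the wire signals can be sketched as a charge-weighted centroid. The wire pitch and the example profile below are illustrative assumptions, not values from the INDUS-2 design.

```python
# Charge-weighted centroid over 16 wire signals, as a PC-side analysis of
# SEWM data might compute it. Pitch and profile values are illustrative.

def beam_centroid(signals, pitch_mm=1.0):
    """Mean beam position in mm from the first wire, weighted by charge."""
    total = sum(signals)
    if total == 0:
        raise ValueError("no signal on any wire")
    return sum(i * pitch_mm * s for i, s in enumerate(signals)) / total

# A roughly Gaussian profile centred between wires 7 and 8:
profile = [0, 0, 1, 3, 8, 15, 24, 30, 30, 24, 15, 8, 3, 1, 0, 0]
print(beam_centroid(profile))   # 7.5 (centre of the symmetric profile)
```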

  14. A hardware readout system for a curved one dimensional position sensitive proportional counter. Pt. 1

    International Nuclear Information System (INIS)

    We developed a hardware readout system for a curved one dimensional position sensitive X-ray proportional counter. Eight analog signals from cathode strips and wedges are processed to give, within a few microseconds, 14-bit information about the position of detection of an X-ray quantum. Elementary parts of our readout system are 9-bit Flash-ADCs, Multiplying-DACs and EPROMs. (orig.)
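    The table-driven position computation hinted at above can be modelled in software: coarsely digitized, complementary cathode signals index a precomputed table, as an EPROM would. The 9-bit codes and 14-bit output match the abstract, but the A/(A+B) charge-division formula and table layout are assumptions for illustration.

```python
# Software model of an EPROM position lookup: for each pair of 9-bit ADC
# codes (A, B), precompute a 14-bit position proportional to A / (A + B).
# The charge-division formula is an assumed, illustrative choice.

ADC_BITS = 9     # flash-ADC resolution (abstract)
OUT_BITS = 14    # position word width (abstract)

def build_table():
    table = {}
    full_scale = (1 << ADC_BITS) - 1
    for a in range(full_scale + 1):
        for b in range(full_scale + 1):
            if a + b == 0:
                continue   # no charge seen: leave entry undefined
            table[(a, b)] = (a * ((1 << OUT_BITS) - 1)) // (a + b)
    return table

table = build_table()
print(table[(100, 100)], table[(511, 0)])   # mid-scale and full-scale
```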

  15. A hardware readout system for a curved one-dimensional position-sensitive proportional counter

    International Nuclear Information System (INIS)

    We developed a hardware readout system for a curved one-dimensional position-sensitive X-ray proportional counter. Eight analog signals from cathode strips and wedges are processed to give, within a few microseconds, 14-bit information about the position of detection of an X-ray quantum. Elementary parts of our readout system are 9-bit flash-ADCs, multiplying-DACs and EPROMs. (orig.)

  16. A simple SQUID system with one operational amplifier as readout electronics

    International Nuclear Information System (INIS)

    We describe dc Superconducting Quantum Interference Device (SQUID) readout electronics in Flux Locked Loop (FLL) mode without an integrator and with only one operational amplifier, called Single Chip Readout Electronics (SCRE). A weakly damped niobium-SQUID magnetometer with a large flux-to-voltage transfer coefficient of about ∂V/∂Φ ≈ 380 μV/Φ0, combined with SCRE, results in a very simple SQUID system. We characterize the system and demonstrate its applicability to magnetocardiography (MCG) and measurements using the Transient ElectroMagnetic (TEM) method. SCRE not only simplifies the readout scheme, but also improves the system stability, the bandwidth and the slew rate. The difference between SCRE and a conventional readout scheme (preamplifier + amplifier + integrator) is also discussed. (paper)

  17. Digital radiography using amorphous selenium: photoconductively activated switch (PAS) readout system.

    Science.gov (United States)

    Reznik, Nikita; Komljenovic, Philip T; Germann, Stephen; Rowlands, John A

    2008-03-01

    A new amorphous selenium (a-Se) digital radiography detector is introduced. The proposed detector generates a charge image in the a-Se layer in a conventional manner, which is stored on electrode pixels at the surface of the a-Se layer. A novel method, called photoconductively activated switch (PAS), is used to read out the latent x-ray charge image. The PAS readout method uses lateral photoconduction at the a-Se surface which is a revolutionary modification of the bulk photoinduced discharge (PID) methods. The PAS method addresses and eliminates the fundamental weaknesses of the PID methods--long readout times and high readout noise--while maintaining the structural simplicity and high resolution for which PID optical readout systems are noted. The photoconduction properties of the a-Se surface were investigated and the geometrical design for the electrode pixels for a PAS radiography system was determined. This design was implemented in a single pixel PAS evaluation system. The results show that the PAS x-ray induced output charge signal was reproducible and depended linearly on the x-ray exposure in the diagnostic exposure range. Furthermore, the readout was reasonably rapid (10 ms for pixel discharge). The proposed detector allows readout of half a pixel row at a time (odd pixels followed by even pixels), thus permitting the readout of a complete image in 30 s for a 40 cm x 40 cm detector with the potential of reducing that time by using greater readout light intensity. This demonstrates that a-Se based x-ray detectors using photoconductively activated switches could form a basis for a practical integrated digital radiography system. PMID:18404939
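    The timing figures quoted above can be checked for consistency. The sketch reads the 10 ms as one half-row discharge, and the pixel pitch it derives is an inference, not a number from the paper.

```python
# Consistency check of the readout timing in the abstract: at 10 ms per
# half-row read and two reads (odd, then even pixels) per row, a 30 s
# frame corresponds to 1500 rows; over a 40 cm edge that implies a pitch
# near 0.27 mm. The pitch is derived here, not stated in the text.

READ_TIME_MS = 10        # one half-row discharge (abstract)
FRAME_TIME_MS = 30_000   # complete image readout (abstract)
DETECTOR_MM = 400.0      # 40 cm detector edge (abstract)

half_row_reads = FRAME_TIME_MS // READ_TIME_MS   # 3000 half-row reads
rows = half_row_reads // 2                       # odd + even pass per row
pitch_mm = DETECTOR_MM / rows                    # implied pixel pitch

print(rows, round(pitch_mm, 3))
```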

  18. The ATLAS liquid argon calorimeter high-voltage system: commissioning, optimisation, and LHC relative luminosity measurement.

    CERN Document Server

    Arfaoui, Samir; Monnier, E

    2011-01-01

    The main goals of the ATLAS scientific programme are the observation or exclusion of physics beyond the Standard Model (SM), as well as the measurement of production cross-sections of SM processes. In order to do so, it is important to measure the luminosity at the interaction point with great precision. The ATLAS luminosity is extracted using several detectors with varying efficiencies and acceptances. Different methods, such as inclusive - or coincidence - event counting and calorimeter integrated current measurements, are calibrated and cross-compared to provide the most accurate luminosity determination. In order to provide more cross-checks and a better control on the systematic uncertainties, an independent measurement using the liquid argon (LAr) forward calorimeter (FCal), based on the readout current of its high-voltage system, has been developed. This document describes how the LAr calorimeter high-voltage system has been installed and commissioned, as well as its application to a relative luminosity ...

  19. The ATLAS liquid argon calorimeter high-voltage system: commissioning, optimisation and LHC relative luminosity measurement

    International Nuclear Information System (INIS)

    The main goals of the ATLAS scientific programme are the observation or exclusion of physics beyond the Standard Model (SM), as well as the measurement of production cross-sections of SM processes. In order to do so, it is important to measure the luminosity at the interaction point with great precision. The ATLAS luminosity is extracted using several detectors with varying efficiencies and acceptances. Different methods, such as inclusive - or coincidence - event counting and calorimeter integrated current measurements, are calibrated and cross-compared to provide the most accurate luminosity determination. In order to provide more cross-checks and a better control on the systematic uncertainties, an independent measurement using the liquid argon (LAr) forward calorimeter (FCal), based on the readout current of its high-voltage system, has been developed. This document describes how the LAr calorimeter high-voltage system has been installed and commissioned, as well as its application to a relative luminosity determination. (author)

  20. DAQ hardware and software development for the ATLAS Pixel Detector

    Science.gov (United States)

    Stramaglia, Maria Elena

    2016-07-01

    In 2014, the Pixel Detector of the ATLAS experiment has been extended by about 12 million pixels thanks to the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented along with newly designed readout hardware to support high bandwidth for data readout and calibration. The hardware is supported by an embedded software stack running on the readout boards. The same boards will be used to upgrade the readout bandwidth for the two outermost barrel layers of the ATLAS Pixel Detector. We present the IBL readout hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel Detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  1. DAQ hardware and software development for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    In 2014, the Pixel Detector of the ATLAS experiment has been extended by about 12 million pixels thanks to the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented along with newly designed read-out hardware to support high bandwidth for data readout and calibration. The hardware is supported by an embedded software stack running on the read-out boards. The same boards will be used to upgrade the read-out bandwidth for the two outermost layers of the ATLAS Pixel Barrel (54 million pixels). We present the IBL read-out hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  2. Upgrade of the Laser calibration system for the ATLAS hadronic calorimeter TileCal

    Science.gov (United States)

    van Woerden, Marius Cornelis

    2016-07-01

    We present in this contribution the new system for Laser calibration of the ATLAS hadronic calorimeter TileCal. The Laser system is part of the three-stage calibration apparatus designed to compute the calibration factors of the individual cells of TileCal. The Laser system is mainly used to correct for short-term drifts of the readout of the individual cells. A sub-percent accuracy in the control of the calibration factors is required. To achieve this goal in the LHC Run 2 conditions, a new Laser system was designed. The architecture of the system is described, with details on the new optical line used to distribute Laser pulses to each individual detector module and on the new electronics used to drive the Laser, to read out optical monitors and to interface the system with the ATLAS readout, trigger and slow control. The LaserII system has been fully integrated into the framework used for measuring calibration factors and for monitoring data quality. First results on the performance of the Laser system are presented.

  3. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  4. System administration of ATLAS TDAQ computing environment

    International Nuclear Information System (INIS)

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  5. Full system test of module to DAQ for ATLAS IBL

    International Nuclear Information System (INIS)

    The IBL (Insertable B-Layer), the innermost layer in the ATLAS detector at the LHC, was successfully integrated into the system in June 2014. IBL system reliability and consistency are under investigation during ongoing milestone runs at CERN. The Back of Crate card (BOC) and the Read-out Driver (ROD), two of the main electronics cards, act as an interface between the IBL modules and the TDAQ chain. The detector data are received, processed and then formatted through the interaction of these two cards. The BOC takes advantage of an S-Link implementation inside its main FPGAs. The S-Link protocol, a standard high-performance data acquisition link between the readout electronics cards and the TDAQ system, was developed and is used at CERN. It is based on the idea that formatted detector data are transferred through optical fibers to the ROS (Read-out System) PC, where they are stored via the ROBIN (Read-out Buffer) cards. This talk presents results that confirm a stable and good performance of the system, from the modules to the readout electronics cards and then to the ROS PCs via S-Link.

  6. Integration of the Trigger and Data Acquisition Systems in ATLAS

    International Nuclear Information System (INIS)

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system.

  7. Integration of the trigger and data acquisition systems in ATLAS

    International Nuclear Information System (INIS)

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system

  8. The ATLAS data quality defect database system

    International Nuclear Information System (INIS)

    The ATLAS experiment at the Large Hadron Collider has implemented a new system for recording information on detector status and data quality, and for transmitting this information to users performing physics analysis. This system revolves around the concept of ''defects,'' which are well-defined, fine-grained, unambiguous occurrences affecting the quality of recorded data. The motivation, implementation, and operation of this system is described. (orig.)
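    The defect concept can be made concrete with a small bookkeeping sketch: defects are named, fine-grained flags attached to ranges of luminosity blocks, and analyses keep only blocks free of intolerable defects. The field names and the example defect below are illustrative, not the actual ATLAS schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Defect:
    name: str         # e.g. "EXAMPLE_SUBDETECTOR_OFF" (hypothetical name)
    run: int
    lb_start: int     # first affected luminosity block (inclusive)
    lb_end: int       # last affected luminosity block (inclusive)
    tolerable: bool   # may analyses keep the data despite the defect?

def good_lbs(all_lbs, defects):
    """Luminosity blocks carrying no intolerable defect."""
    bad = {lb for d in defects if not d.tolerable
           for lb in range(d.lb_start, d.lb_end + 1)}
    return [lb for lb in all_lbs if lb not in bad]

defects = [Defect("EXAMPLE_SUBDETECTOR_OFF", 200000, 5, 7, tolerable=False)]
print(good_lbs(range(1, 11), defects))   # [1, 2, 3, 4, 8, 9, 10]
```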

  9. Planar pixel detector module development for the HL-LHC ATLAS pixel system

    Energy Technology Data Exchange (ETDEWEB)

    Bates, Richard L., E-mail: richard.bates@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Buttar, C.; Stewart, A.; Blue, A.; Doonan, K.; Ashby, J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Casse, G.; Dervan, P.; Forshaw, D.; Tsurin, I. [The University of Liverpool, Liverpool (United Kingdom); Brown, S.; Pater, J. [The Univiersty of Manchester, Manchester (United Kingdom)

    2013-12-11

    The ATLAS pixel detector for the HL-LHC requires the development of large area pixel modules that can withstand doses up to 10{sup 16} 1 MeV n{sub eq} cm{sup −2}. The area of the pixel detector system will be over 5 m{sup 2} and as such low cost, large area modules are required. The development of a quad module based on 4 FE-I4 readout integrated chips (ROIC) will be discussed. The FE-I4 ROIC is a large area chip and the yield of the flip-chip process to form an assembly is discussed for single chip assemblies. The readout of the quad module for laboratory tests will be reported.

  10. Planar pixel detector module development for the HL-LHC ATLAS pixel system

    Science.gov (United States)

    Bates, Richard L.; Buttar, C.; Stewart, A.; Blue, A.; Doonan, K.; Ashby, J.; Casse, G.; Dervan, P.; Forshaw, D.; Tsurin, I.; Brown, S.; Pater, J.

    2013-12-01

    The ATLAS pixel detector for the HL-LHC requires the development of large area pixel modules that can withstand doses up to 1016 1 MeV neq cm-2. The area of the pixel detector system will be over 5 m2 and as such low cost, large area modules are required. The development of a quad module based on 4 FE-I4 readout integrated chips (ROIC) will be discussed. The FE-I4 ROIC is a large area chip and the yield of the flip-chip process to form an assembly is discussed for single chip assemblies. The readout of the quad module for laboratory tests will be reported.

  11. Planar pixel detector module development for the HL-LHC ATLAS pixel system

    International Nuclear Information System (INIS)

    The ATLAS pixel detector for the HL-LHC requires the development of large area pixel modules that can withstand doses up to 1016 1 MeV neq cm−2. The area of the pixel detector system will be over 5 m2 and as such low cost, large area modules are required. The development of a quad module based on 4 FE-I4 readout integrated chips (ROIC) will be discussed. The FE-I4 ROIC is a large area chip and the yield of the flip-chip process to form an assembly is discussed for single chip assemblies. The readout of the quad module for laboratory tests will be reported

  12. Simulating the ATLAS Distributed Data Management System

    CERN Document Server

    Barisits, M; Lassnig, M; Molfetas, M

    2012-01-01

    The ATLAS Distributed Data Management system organizes more than 90PB of physics data across more than 100 sites globally. Over 14 million files are transferred daily with strongly varying usage patterns. For performance and scalability reasons it is imperative to adapt and improve the data management system continuously. Therefore future system modifications in hardware, software, as well as policy, need to be evaluated to accomplish the intended results and to avoid unwanted side effects. Due to the complexity of large-scale distributed systems this evaluation process is primarily based on expert knowledge, as conventional evaluation methods are inadequate. However, this error-prone process lacks quantitative estimations and leads to inaccuracy as well as incorrect conclusions. In this work we present a novel, full-scale simulation framework. This modular simulator is able to accurately model the ATLAS Distributed Data Management system. The design and architecture of the component-based software is presen...

  13. Feasibility of Silicon strip detectors and low noise multichannel readout system for medical digital radiography

    International Nuclear Information System (INIS)

    An x-ray detection system based on silicon strip detectors and a low noise multichannel readout system was developed in the framework of the collaboration project. A study of the feasibility of this detector system for medical applications was done. Our system has characteristics that match the requirements of a digital imaging system.

  14. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  15. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; De, K; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2014-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  16. Simulation of the upgraded Phase-1 Trigger Readout Electronics of the Liquid-Argon Calorimeter of the ATLAS Detector at the LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00338138

    In the context of an intensive upgrade plan for the LHC in order to provide proton beams of increased luminosity, a revision of the data readout electronics of the Liquid-Argon Calorimeter of the ATLAS detector is scheduled. This is required to retain the efficiency of the trigger at increased event rates despite its fixed bandwidth. The focus lies on the early digitization and finer segmentation of the data provided to the trigger. Furthermore, there is the possibility to implement new energy reconstruction algorithms which are adapted to the specific requirements of the trigger. In order to support crucial design decisions, such as the digitization scale or the choice of digital signal processing algorithms, comprehensive simulations are required. High trigger efficiencies are decisive for the successful continuation of the measurements of rare Standard Model processes as well as for a high sensitivity to new physics beyond the established theories. It can be shown that a significantly improved res...

  17. The ATLAS Data Flow system for the Second LHC Run

    CERN Document Server

    Hauser, Reiner; The ATLAS collaboration

    2015-01-01

    After its first shutdown, LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the Readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, the f...

  18. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  19. Development and characterization of the readout system for POLARBEAR-2

    CERN Document Server

    Barron, D; Akiba, Y; Aleman, C; Arnold, K; Atlas, M; Bender, A; Borrill, J; Chapman, S; Chinone, Y; Cukierman, A; Dobbs, M; Elleflot, T; Errard, J; Fabbian, G; Feng, G; Gilbert, A; Halverson, N W; Hasegawa, M; Hattori, K; Hazumi, M; Holzapfel, W L; Hori, Y; Inoue, Y; Jaehnig, G C; Katayama, N; Keating, B; Kermish, Z; Keskitalo, R; Kisner, T; Jeune, M Le; Lee, A T; Matsuda, F; Matsumura, T; Morii, H; Myers, M J; Navaroli, M; Nishino, H; Okamura, T; Peloton, J; Rebeiz, G; Reichardt, C L; Richards, P L; Ross, C; Sholl, M; Siritanasak, P; Smecher, G; Stebor, N; Steinbach, B; Stompor, R; Suzuki, A; Suzuki, J; Takada, S; Takakura, S; Tomaru, T; Wilson, B; Yamaguchi, H; Zahn, O

    2014-01-01

    POLARBEAR-2 is a next-generation receiver for precision measurements of the polarization of the cosmic microwave background (CMB). Scheduled to deploy in early 2015, it will observe alongside the existing POLARBEAR-1 receiver, on a new telescope in the Simons Array on Cerro Toco in the Atacama desert of Chile. For increased sensitivity, it will feature a larger-area focal plane, with a total of 7,588 polarization-sensitive antenna-coupled Transition Edge Sensor (TES) bolometers, with a design sensitivity of 4.1 uKrt(s). The focal plane will be cooled to 250 millikelvin, and the bolometers will be read out with 40x frequency-domain multiplexing, with 36 optical bolometers on a single SQUID amplifier, along with 2 dark bolometers and 2 calibration resistors. To increase the multiplexing factor from 8x for POLARBEAR-1 to 40x for POLARBEAR-2 requires additional bandwidth for the SQUID readout and well-defined frequency channel spacing. Extending to these higher frequencies requires new c...
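    The need for well-defined channel spacing at a 40x multiplexing factor can be illustrated with a toy frequency plan: 40 evenly spaced carriers on one SQUID within an assumed readout band. The 1.6-4.8 MHz band below is an assumption for illustration; the actual POLARBEAR-2 plan may differ.

```python
# Toy frequency plan for 40x frequency-domain multiplexing: one SQUID
# carries 40 channels (36 optical + 2 dark + 2 calibration), evenly
# spaced across an assumed readout band.

N_CHANNELS = 40                   # per SQUID amplifier (abstract)
F_LO_HZ, F_HI_HZ = 1.6e6, 4.8e6   # assumed readout band, illustrative

spacing = (F_HI_HZ - F_LO_HZ) / (N_CHANNELS - 1)
channels = [F_LO_HZ + i * spacing for i in range(N_CHANNELS)]

print(len(channels), round(spacing))   # 40 channels, ~82 kHz apart
```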

  20. A new readout control system for the LHCb upgrade at CERN

    CERN Document Server

    Alessio, Federico

    2012-01-01

    The LHCb experiment has proposed an upgrade towards a full 40 MHz readout system in order to run between five and ten times its initial design luminosity. The entire readout architecture will be upgraded in order to cope with higher sub-detector occupancies, higher rate and higher network load. In this paper, we describe the architecture, functionalities and a first hardware implementation of a new fast Readout Control system for the LHCb upgrade, which will be entirely based on FPGAs and bi-directional links. We also outline the real-time implementations of the new Readout Control system, together with solutions on how to handle the synchronous distribution of timing and synchronous information to the complex upgraded LHCb readout architecture. One section will also be dedicated to the control and usage of the newly developed CERN GBT chipset to transmit fast and slow control commands to the upgraded LHCb Front-End electronics. At the end, we outline the plans for the deployment of the system in the global L...

  1. Test of CMS tracker silicon detector modules with the ARC readout system

    CERN Document Server

    Axer, M; Flügge, G; Franke, T; Hegner, B; Hermanns, T; Kasselmann, S T; Mnich, J; Nowack, A; Pooth, O; Pottgens, M

    2004-01-01

    The CMS tracker will be equipped with 16,000 silicon microstrip detector modules covering a surface of approximately 220 m**2. For quality control, a compact and inexpensive DAQ system is needed to monitor the mass production in industry and in the CMS production centres. To meet these requirements, a set-up called the APV Readout Controller (ARC) system was developed and distributed among all collaborating institutes to perform full readout tests of hybrids and modules at each production step. The system consists of all necessary hardware components and C++ based readout software with a LabVIEW graphical user interface (LabVIEW is a product of National Instruments, Austin, USA), and provides a full database connection to track every single module component during the production phase. Two preseries of Tracker End Cap (TEC) silicon detector modules have been produced by the TEC community and tested with the ARC system at Aachen. The results of the second series are presented.

  2. Upgraded Readout and Trigger Electronics for the ATLAS Liquid Argon Calorimeter at the LHC at the Horizons 2018-2022

    CERN Document Server

    Oliveira Damazio, Denis; The ATLAS collaboration

    2013-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce a total of 182,486 signals which are digitized and processed by the front-end and back-end electronics at every triggered event. In addition, the front-end electronics sums analog signals to provide coarsely grained energy sums, called trigger towers, to the first-level trigger system, which is optimized for nominal LHC luminosities. However, the pile-up noise expected during the High Luminosity phases of the LHC will be increased by factors of 3 to 7. An improved spatial granularity of the trigger primitives is therefore proposed in order to improve the identification performance for trigger signatures, such as electrons, photons, tau leptons, jets, and total and missing energy, at high background rejection rates. For the first upgrade phase in 2018, new LAr Trigger Digitizer Boards (LTDB) are being designed to receive higher granularity signals, digitize them on the detector and send them via fast optical links to a new digital processing system (DPS). The DPS applies...

  3. Grain-A Java data analysis system for Total Data Readout

    International Nuclear Information System (INIS)

    Grain is a data analysis system developed to be used with the novel Total Data Readout data acquisition system. In Total Data Readout all the electronics channels are read out asynchronously in singles mode and each data item is timestamped. Event building and analysis has to be done entirely in the software post-processing the data stream. A flexible and efficient event parser and the accompanying software system have been written entirely in Java. The design and implementation of the software are discussed along with experiences gained in running real-life experiments

  4. A camac based data acquisition system for flat-panel image array readout

    International Nuclear Information System (INIS)

    A readout system has been developed to facilitate the digitization and subsequent display of image data from two-dimensional, pixellated, flat-panel, amorphous silicon imaging arrays. These arrays have been designed specifically for medical x-ray imaging applications. The readout system is based on hardware and software developed for various experiments at CERN and Fermi National Accelerator Laboratory. Additional analog signal processing and digital control electronics were constructed specifically for this application. The authors report on the form of the resulting data acquisition system, discuss aspects of its performance, and consider the compromises which were involved in its design

  5. A 40 GByte/s read-out system for GEM

    International Nuclear Information System (INIS)

    The preliminary design of the read-out system for the GEM (Gammas, Electrons, Muons) detector at the Superconducting Super Collider is presented. The system reads all digitized data from the detector data sources at a Level 1 trigger rate of up to 100 kHz. A total read-out bandwidth of 40 GBytes/s is available. Data are stored in buffers that are accessible for further event filtering by an on-line processor farm. Data are transported to the farm only as they are needed by the higher-level trigger algorithms, leading to a reduced bandwidth requirement in the Data Acquisition System
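
    The rate and bandwidth figures quoted above fix the average event size the system can sustain; a quick check (the function name is invented for illustration):

    ```python
    def max_event_size(bandwidth_bytes_per_s, trigger_rate_hz):
        """Average event size (bytes) a readout system can sustain."""
        return bandwidth_bytes_per_s / trigger_rate_hz

    # 40 GByte/s aggregate bandwidth at a 100 kHz Level 1 rate
    size_bytes = max_event_size(40e9, 100e3)  # 400 kBytes per event
    ```

    Any sustained event size above this average forces back-pressure into the front-end buffers, which is why the design only moves data on demand from the higher-level triggers.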

  6. The electronic readout system used on the Mk II R.A.L. positron camera

    International Nuclear Information System (INIS)

    The paper describes the operating principles of the electronic readout system as used on the Mk II R.A.L. positron camera. The individual modules are described in detail, and the specifications and the performance figures for the individual units, and of the complete system are given. Some early results obtained with the full system are presented. (author)

  7. Fabrication and test of a 70000 channels electronic pad readout system for multi-step avalanche chambers

    International Nuclear Information System (INIS)

    A new readout concept based on a custom-designed chip containing both analog and digital functions, as well as ultra-thin mounting with the chip-on-board technique, is presented. The full readout system, as well as its fabrication and testing, is described. A 70,000-channel system based on this concept was installed in the WA98 experiment at the CERN SPS. The performance of the readout electronics is presented. (orig.)

  8. Modular pixelated detector system with the spectroscopic capability and fast parallel read-out

    International Nuclear Information System (INIS)

    A modular pixelated detector system was developed for imaging applications where spectroscopic analysis of detected particles is advantageous, e.g. for energy-sensitive X-ray radiography, fluorescent and high-resolution neutron imaging, etc. The presented system consists of an arbitrary number of independent versatile modules. Each module is equipped with a pixelated edgeless detector with spectroscopic capability and has its own fast read-out electronics. The design of the modules allows the assembly of various planar and stacked detector configurations, to enlarge the active area and/or to improve detection efficiency, while each detector is read out separately. Consequently, the read-out speed is almost the same as that for a single module (up to 850 fps). The system performance and application examples are presented

  9. Architecture of a modular, multichannel readout system for dense electrochemical biosensor microarrays

    International Nuclear Information System (INIS)

    The architecture of a modular, multichannel readout system for dense electrochemical microarrays, targeting Lab-on-a-Chip applications, is presented. This approach promotes efficient component reusability through a hybrid multiplexing methodology, maintaining high levels of sampling performance and accuracy. Two readout modes are offered, which can be dynamically interchanged following signal profiling, to cater for both rapid signal transitions and weak current responses. Additionally, functional extensions to the described architecture are discussed, which provide the system with multi-biasing capabilities. A prototype integrated circuit of the proposed architecture’s analog core and a supporting board were implemented to verify the working principles. The system was evaluated using standard loads, as well as electrochemical sensor arrays. Through a range of operating conditions and loads, the prototype exhibited a highly linear response and accurately delivered the readout of input signals with fast transitions and wide dynamic ranges. (paper)

  10. The clock modules in TOF readout system for heavy ion experiments at IMP

    International Nuclear Information System (INIS)

    This paper describes two clock modules in the time-of-flight readout systems, which are applied in the Cold Target Recoil Ion Momentum Spectrometer (COLTRIMS) system and the Cooler Storage Ring (CSRm) at the Institute of Modern Physics, Chinese Academy of Sciences. The two clock modules are designed as a 3U PXI module and a 6U PXI module. With high-precision crystal oscillators and clock distribution, these two modules deliver clock signals to the TOF readout electronics modules with low jitter of less than 11 ps (RMS) and 12 ps (RMS), respectively. (authors)

  11. A digital readout system for the CMS Phase I Pixel Upgrade

    International Nuclear Information System (INIS)

    The Phase I Upgrade to the CMS Pixel Detector at the LHC features a new 400 Mb/s digital readout system. This new system utilizes upgraded custom ASICs, PSI46digv2.1 Read Out Chips and Token Bit Manager for data packaging, new optical links and changes to the Front End Drivers. We are reporting on the new architecture of the full readout chain, the new schema for data encoding/transmission, and the results of preliminary testing of the new optical components

  12. A digital readout system for the CMS Phase I Pixel Upgrade

    CERN Document Server

    Stringer, Robert Wayne

    2015-01-01

    The Phase I Upgrade to the CMS Pixel Detector at the LHC features a new 400 Mb/s digital readout system. This new system utilizes upgraded custom ASICs, PSI46dig Read Out Chips (ROC) and Token Bit Manager (TBM08/09) for data packaging, new optical links, and changes to the Front End Drivers (FEDs). We will be presenting the new architecture of the full readout chain, the new schema for data encoding/transmission, and the results of preliminary testing of the new components.

  13. Two-dimensional electronic readout system for multi-step-avalanche chambers

    International Nuclear Information System (INIS)

    We present prototype studies of a new technical solution of detector readout for measurements of charged particles at very high particle densities. In particular, this paper describes a readout system for multi-step avalanche chambers designed for the WA98 experiment at the CERN SPS. Results from the prototype studies are used for the design parameters of a readout chip containing both analog and digital functions. Simulations of the final system show that the position of the electron cloud can be reconstructed for single particles to an accuracy of 100 and 300 μm in the horizontal and vertical directions, respectively. Separation of two tracks about 5 mm apart is also obtained from the simulation. (orig.)

  14. A readout system for a cosmic ray telescope using Resistive Plate Chambers

    International Nuclear Information System (INIS)

    Resistive Plate Chambers (RPCs) are widely used in high energy physics for both tracking and triggering purposes. They have good time resolution and with finely segmented readout can also give a spatial resolution of better than 1 mm. RPCs can be produced cost-effectively on large scales, are of rugged build, and have excellent detection efficiency for charged particles. Our group has successfully built a Muon Scattering Tomography (MST) prototype, using 12 RPCs to obtain tracking information of muons going through a target volume of ∼ 50 cm × 50 cm × 70 cm, reconstructing both the incoming and outgoing muon tracks. We describe a readout system for fine-pitch RPCs using MAROC3 readout chips capable of scaling to a large system.

  15. Two-dimensional electronic readout system for multi-step-avalanche chambers

    Energy Technology Data Exchange (ETDEWEB)

    Carlen, L. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Garpman, S. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Gustafsson, H.-Aa. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Loehner, H. [Kernfysisch Versneller Instituut, Zernikelaan 25, Nl-9747 AA Groningen (Netherlands); Nystrand, J. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Oskarsson, A. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Otterlund, I. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Svensson, T. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Stenlund, E. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Soederstroem, K. [Lund Univ. (Sweden). Div. of Cosmic and Subatomic Phys.; Whitlow, H.J. [Department of Nuclear Physics, Lund Institute of Technology, Soelvegatan 14, S-223 62 Lund (Sweden)

    1997-06-11

    We present prototype studies of a new technical solution of detector readout for measurements of charged particles at very high particle densities. In particular, this paper describes a readout system for multi-step avalanche chambers designed for the WA98 experiment at the CERN SPS. Results from the prototype studies are used for the design parameters of a readout chip containing both analog and digital functions. Simulations of the final system show that the position of the electron cloud can be reconstructed for single particles to an accuracy of 100 and 300 {mu}m in the horizontal and vertical directions, respectively. Separation of two tracks about 5 mm apart is also obtained from the simulation. (orig.).

  16. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. The integrator also estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions.
When the analyst is

  17. Coupling Influence on Signal Readout of a Dual-Parameter LC Resonant System

    Directory of Open Access Journals (Sweden)

    Jijun Xiong

    2015-01-01

    Dual-parameter inductive-capacitive (LC) resonant sensors are gradually becoming the measurement trend in complex harsh environments; however, the coupling between inductors greatly affects the readout signal, which is very difficult to analyze with simple mathematical tools. By changing the values of specific variables in a MATLAB code, the influence of the coupling between coils on the readout signal is analyzed. Our preliminary conclusions underline that changing the coupling to the antenna greatly affects the readout signal, but it simultaneously influences the other signal. When f01 = f02, it is better to broaden the difference between the two coupling coefficients k1 and k2. On the other hand, when f01 is smaller than f02, it is better to decrease the coupling between the sensor inductors, k12, in order to obtain two readout signals balanced in strength. Finally, a test system including a discrete capacitor soldered to a printed circuit board (PCB) based planar spiral coil is built, and the readout signals for different relative inductor positions are analyzed. All experimental results are in good agreement with the results of the MATLAB simulation.
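
    The qualitative behaviour described above can be sketched numerically with a simple reflected-impedance model of a readout antenna inductively coupled to an LC sensor tank; all component values and the function name below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    def antenna_impedance(f, La=1e-6, Ra=0.5,
                          tanks=((1e-6, 2.5e-12, 2.0, 0.1),)):
        """Impedance seen at the readout antenna: the antenna's own R and L
        plus the impedance reflected from each inductively coupled LC tank.
        Each tank is a tuple (Ls, Cs, Rs, k) with coupling coefficient k."""
        w = 2 * np.pi * np.asarray(f, dtype=float)
        Z = Ra + 1j * w * La
        for Ls, Cs, Rs, k in tanks:
            Ztank = Rs + 1j * w * Ls + 1.0 / (1j * w * Cs)
            Z = Z + (w ** 2) * (k ** 2) * La * Ls / Ztank  # reflected term
        return Z

    # Sweeping the frequency, the real part of Z peaks near the tank's
    # resonance f0 = 1 / (2*pi*sqrt(Ls*Cs)), i.e. the readout signal.
    f = np.linspace(80e6, 120e6, 8001)
    f_peak = f[np.argmax(antenna_impedance(f).real)]
    ```

    Adding a second tank tuple with its own k models the dual-parameter case, where the two coupling coefficients determine the relative strength of the two readout peaks, as the abstract discusses.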

  18. Superconductor Microwave Kinetic Inductance Detectors: System Model of the Readout Electronics

    Directory of Open Access Journals (Sweden)

    F. Alimenti

    2009-06-01

    This paper deals with the readout electronics needed by superconductor Microwave Kinetic Inductance Detectors (MKIDs). MKIDs are typically implemented in the form of cryogenically cooled, high quality factor microwave resonators. The natural frequency of these resonators changes as millimeter or sub-millimeter wave radiation impinges on the resonator itself. A quantitative system model of the readout electronics (very similar to that of a vector network analyzer) has been implemented in the ADS environment and tested in several simulation experiments. The developed model is a tool to further optimize the readout electronics and to design the frequency allocation of parallel-connected MKID resonators. The applications of MKIDs will be in microwave and millimeter-wave radiometric imaging as well as in radio-astronomy focal plane arrays.
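
    The resonator response such a vector-network-analyzer-style readout measures can be sketched with the standard notch ("hanger") transmission model; the resonance frequency and quality factors below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def s21(f, f0=5e9, q_total=2.0e4, q_coupling=3.0e4):
        """|S21| of a notch-type resonator in the standard hanger model:
        transmission dips at the resonant frequency f0, and the dip moves
        as incident radiation shifts f0 via the kinetic inductance."""
        x = (f - f0) / f0
        return np.abs(1.0 - (q_total / q_coupling) / (1.0 + 2j * q_total * x))

    # Locate the transmission dip, as a swept-frequency readout would.
    f = np.linspace(4.999e9, 5.001e9, 2001)
    f_dip = f[np.argmin(s21(f))]
    ```

    Because each resonator produces one narrow dip, many MKIDs can share a single feedline if their f0 values are allocated without overlap, which is the frequency-allocation design problem the model is meant to support.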

  19. The baseline dataflow system of the ATLAS trigger and DAQ

    CERN Document Server

    Vermeulen, J C; Dos Anjos, A; Barisonzi, M; Beck, H P; Beretta, M; Blair, R; Bogaerts, J A C; Boterenbrood, H; Botterill, David R; Ciobotaru, M; Palencia-Cortezon, E; Cranfield, R; Crone, G J; Dawson, J; Di Girolamo, B; Dobinson, Robert W; Ermoline, Y; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Golonka, P; Gorini, B; Green, B; Gruwé, M; Haas, S; Haeberli, C; Hasegawa, Y; Hauser, R; Hinkelbein, C; Hughes-Jones, R E; Jansweijer, P; Joos, M; Kaczmarska, A; Knezo, E; Kieft, G; Korcyl, K; Kugel, A; Lankford, A; Lehmann, G; Le Vine, M J; Liu, W; Maeno, T; Losada-Maia, L; Mapelli, L; Martin, B; McLaren, R; Meirosu, C; Misiejuk, A; Mommsen, R K; Mornacchi, Giuseppe; Müller, M; Nagasaka, Y; Nakayoshi, K; Papadopoulos, I M; Petersen, J; De Matos-Lopes-Pinto, P; Prigent, D; Pérez-Réale, V; Schlereth, J L; Shimojima, M; Spiwoks, R; Stancu, S; Strong, J; Tremblet, L; Werner, P; Wickens, F J; Yasu, Y; Yu, M; Zobernig, H; Zurek, M

    2003-01-01

    In this paper the baseline design of the ATLAS High Level Trigger and Data Acquisition system with respect to the DataFlow aspects, as presented in the recently submitted ATLAS Trigger/DAQ/Controls Technical Design Report [1], is reviewed and recent results of testbed measurements and from modelling are discussed. [1] ATLAS-TDR-016; CERN-LHCC-2003-022, http://cdsweb.cern.ch/search.py?recid=616089

  20. Pulse mode actuation-readout system based on MEMS resonator for liquid sensing

    DEFF Research Database (Denmark)

    Tang, Meng; Cagliani, Alberto; Davis, Zachary James;

    2014-01-01

    A MEMS (Micro-Electro-Mechanical Systems) bulk disk resonator is applied for mass sensing in its dynamic mode. The classical readout circuitry involves a sophisticated feedback loop and feedthrough compensation. We propose a simple, straightforward, loop-free pulse mode actuation and capacitive readout scheme. In order to verify its feasibility in a liquid bio-chemical sensing environment, an experimental measurement is conducted with a humidity sensing application. The measured resonant frequency changes by 60 kHz out of 67.7 MHz for a humidity change of 0-80%.
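
    The ring-down style of pulse-mode readout can be illustrated with a short simulation: excite once, record the decaying oscillation, and recover the resonant frequency from the spectrum. The sampling rate, record length and quality factor below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    # Simulated ring-down of the disk resonator after a single actuation
    # pulse, followed by an FFT peak search that recovers the resonant
    # frequency -- the quantity the mass sensor tracks.
    fs, n = 1.0e9, 16384                  # sample rate, record length (assumed)
    f0, q = 67.7e6, 1000.0                # resonance and Q factor (assumed)
    t = np.arange(n) / fs
    tau = q / (np.pi * f0)                # amplitude decay time constant
    ring = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

    spec = np.abs(np.fft.rfft(ring))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f_est = freqs[np.argmax(spec)]        # estimated resonant frequency
    ```

    A 60 kHz shift, as reported above, is comfortably larger than the frequency resolution of such a record (fs/n ≈ 61 kHz here; interpolating around the peak bin would do better), which is what makes the loop-free scheme viable.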

  1. Comparisons of the MINOS near and far detector readout systems at a test beam

    International Nuclear Information System (INIS)

    MINOS is a long baseline neutrino oscillation experiment that uses two detectors separated by 734 km. The readout systems used for the two detectors are different and have to be independently calibrated. To verify and make a direct comparison of the calibrated response of the two readout systems, test beam data were acquired using a smaller calibration detector. This detector was simultaneously instrumented with both readout systems and exposed to the CERN PS T7 test beam. Differences in the calibrated response of the two systems are shown to arise from differences in response non-linearity, photomultiplier tube crosstalk, and threshold effects at the few percent level. These differences are reproduced by the Monte Carlo (MC) simulation to better than 1% and a scheme that corrects for these differences by calibrating the MC to match the data in each detector separately is presented. The overall difference in calorimetric response between the two readout systems is shown to be consistent with zero to a precision of 1.3% in data and 0.3% in MC with no significant energy dependence.
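
    The kind of non-linearity correction described can be illustrated with a toy saturation model; the functional form and coefficient are hypothetical, not the MINOS parametrisation. The point is only that a measured response non-linearity can be inverted to recover a calibrated value:

    ```python
    # Toy response with a saturating non-linearity, R(E) = E / (1 + a*E),
    # and its exact inverse. Both the form and `a` are assumptions made
    # for illustration, not taken from the MINOS calibration.
    def response(energy, a=0.01):
        """Detector reading for a deposited energy (arbitrary units)."""
        return energy / (1.0 + a * energy)

    def calibrated(reading, a=0.01):
        """Invert the non-linearity to recover the deposited energy."""
        return reading / (1.0 - a * reading)

    E_true = 25.0
    E_rec = calibrated(response(E_true))  # round-trips back to 25.0
    ```

    In practice the correction is applied to the simulation rather than the data, matching the scheme described above: the MC is calibrated to reproduce each readout system's measured response separately.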

  2. Readout Circuit System for In2O3/RGO Nanocomposite Gas Sensors

    Science.gov (United States)

    Lin, Cheng-Yi

    A readout circuit system for In2O3/RGO nanocomposite gas sensors using open-source software has been developed for the first time. The readout system adopts a Raspberry Pi as the electronic control unit and incorporates different electronic components to realize the function of a source measure unit (SMU). During operation, real-time results of measured gas concentrations can be accessed through the Internet, and alarm functions are also included. All control programs were written in the Python language. Using this readout system, the current response of gas sensors toward oxygen concentrations (2,000-32,000 ppm) in an argon environment at 140 °C is in good agreement with the data measured by an Agilent SMU (B2902A). Furthermore, the temperature effects and transient response of the proposed system are investigated. The success of this readout system demonstrates the potential of open-source hardware for constructing scientific instruments with the advantages of miniaturization, low cost, flexible design, and Internet access.
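
    A minimal sketch of the alarm logic such a Python readout loop could apply to each measured concentration; the thresholds reuse the 2,000-32,000 ppm range quoted above, and the function name is invented for illustration:

    ```python
    # Classify each reading against the sensor's calibrated O2 range
    # (hypothetical helper; thresholds from the range quoted above).
    def check_alarm(ppm_reading, low=2000, high=32000):
        if ppm_reading < low:
            return "below-range"
        if ppm_reading > high:
            return "above-range"
        return "ok"
    ```

    On the real system such a check would run once per acquisition cycle, with out-of-range results pushed to the Internet-facing alarm function the abstract mentions.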

  3. Comparisons of the MINOS Near and Far Detector Readout Systems at a Test Beam

    CERN Document Server

    Cabrera, A

    2009-01-01

    MINOS is a long baseline neutrino oscillation experiment that uses two detectors separated by 734 km. The readout systems used for the two detectors are different and have to be independently calibrated. To verify and make a direct comparison of the calibrated response of the two readout systems, test beam data were acquired using a smaller calibration detector. This detector was simultaneously instrumented with both readout systems and exposed to the CERN PS T7 test beam. Differences in the calibrated response of the two systems are shown to arise from differences in response non-linearity, photomultiplier crosstalk, and threshold effects at the few percent level. These differences are reproduced by the Monte Carlo (MC) simulation to better than 1% and a scheme that corrects for these differences by calibrating the MC to match the data in each detector separately is presented. The overall difference in calorimetric response between the two readout systems is shown to be consistent with zero to a precision of...

  4. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which gives it many disadvantages. With the development of multi-core processors and new scheduling algorithms, Linux OS exhibits performance in real-time applications similar to that of VxWorks. It has been successfully used even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory buffer for DMA transfer is allocated by modifying the Linux kernel (version 2.6) source code slightly. To increase the throughput of network transmission, the user software is designed to run in parallel. To achieve high performance in real-time data transfer from hardware to software, memory-mapping techniques must be used to avoid unnecessary data copying. A simplified readout system was implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions on the use of this method, hardware or software, which means that it can be easily migrated to other interrupt-related applications.
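
    The parallel user-software structure described above is essentially a bounded producer/consumer pipeline: one stage reads packed fragments, another forwards them, decoupled by a queue that absorbs bursts. A minimal sketch (queue depth, fragment size, and all names are illustrative assumptions, and simulated data stands in for the DMA buffer):

    ```python
    import queue
    import threading

    def run_pipeline(n_events=100):
        """Reader thread produces event fragments; sender thread forwards
        them. The bounded queue decouples the two stages, as the parallel
        design described above requires."""
        buf = queue.Queue(maxsize=16)
        sent = []

        def reader():
            for _ in range(n_events):
                buf.put(bytes(64))     # stand-in for a DMA-filled fragment
            buf.put(None)              # end-of-run marker

        def sender():
            while True:
                frag = buf.get()
                if frag is None:
                    break
                sent.append(frag)      # stand-in for the network send

        threads = [threading.Thread(target=reader),
                   threading.Thread(target=sender)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return len(sent)
    ```

    The bounded queue is the key design choice: when the sender briefly stalls, the reader blocks on `put` instead of overrunning memory, mirroring how the DMA buffer provides slack between hardware and software.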

  5. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    Science.gov (United States)

    Sivolella, A.; Maidantchik, C.; Ferreira, F.

    2012-12-01

    The Tile Calorimeter (TileCal) is one of the ATLAS sub-detectors. Its read-out is performed by about 10,000 PhotoMultiplier Tubes (PMTs), and the signal of each PMT is digitized by an electronic channel. The Monitoring and Calibration Web System (MCWS) supports the data quality analysis of these electronic channels. This application was developed to assess the detector status and verify its performance. It provides the user with the list of TileCal known problematic channels, which is stored in the ATLAS conditions database (COOL DB). The bad channels list guides the data quality validator in identifying new problematic channels and is used in data reconstruction; the system also allows the channels list to be updated directly in the COOL database. MCWS can generate summary results, such as eta-phi plots and comparative tables of the masked channels percentage. Regularly, during LHC (Large Hadron Collider) shutdowns, maintenance of the detector equipment is performed. When a channel is repaired, its calibration constants stored in the COOL database have to be updated, and the MCWS system manages the update of these calibration constant values. The MCWS has been used by the Tile community since 2008, during the commissioning phase, and was upgraded to comply with ATLAS operation specifications. Among its future developments, an integration of MCWS with the TileCal control Web system (DCS) is foreseen in order to identify high voltage problems automatically.

  6. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Butti, P; The ATLAS collaboration

    2014-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built from different technologies, planar silicon modules or microstrips (the PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2 T solenoidal magnetic field. Misalignments and deformations of the active detector elements deteriorate the track reconstruction resolution and lead to systematic biases in the measured track parameters. The alignment procedure exploits various advanced tools and techniques in order to determine the module positions and correct for deformations. For LHC Run II, the system is being upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL).

  7. Wire spark chamber capacitive readout system with low leakage current and small systematic error

    Energy Technology Data Exchange (ETDEWEB)

    Anderhub, H.B.; Boecklin, J.; von Gunten, H.P.; Koenig, H.; Le Coultre, P.; Makowiecki, D.; Seiler, P.G. (Eidgenoessische Technische Hochschule, Zurich (Switzerland). Lab. fuer Hochenergiephysik)

    1983-02-15

    A wire spark chamber capacitive readout system with analog FET switch multiplexing and CAMAC interface is described. Two wire planes per chamber are read out. The information of each plane is sequentially digitized in one ADC. This and the low leakage current of the FET switches guarantee a small systematic error of the measurement of the spark position.

  8. A wire spark chamber capacitive readout system with low leakage current and small systematic error

    International Nuclear Information System (INIS)

    A wire spark chamber capacitive readout system with analog FET switch multiplexing and CAMAC interface is described. Two wire planes per chamber are read out. The information of each plane is sequentially digitized in one ADC. This and the low leakage current of the FET switches guarantee a small systematic error of the measurement of the spark position. (orig.)

  9. Development of microwave kinetic inductance detectors and their readout system for LiteBIRD

    Energy Technology Data Exchange (ETDEWEB)

    Hattori, K.; Hazumi, M. [High Energy Accelerator Research Organization, Tsukuba, Ibaraki 305-0801 (Japan); Ishino, H.; Kibayashi, A. [Okayama University, Okayama 700-8530 (Japan); Kibe, Y., E-mail: kibe@fphy.hep.okayama-u.ac.jp [Okayama University, Okayama 700-8530 (Japan); Mima, S. [Terahertz-wave Research Group, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Okamura, T.; Sato, N.; Tomaru, T. [High Energy Accelerator Research Organization, Tsukuba, Ibaraki 305-0801 (Japan); Yamada, Y. [Okayama University, Okayama 700-8530 (Japan); Yoshida, M. [High Energy Accelerator Research Organization, Tsukuba, Ibaraki 305-0801 (Japan); Yuasa, T. [Okayama University, Okayama 700-8530 (Japan); Watanabe, H. [SOKENDAI, Tsukuba, Ibaraki 305-0801 (Japan)

    2013-12-21

    Primordial gravitational waves generated by inflation have produced an odd-parity pattern, the B-mode, in the cosmic microwave background (CMB) polarization. LiteBIRD (Light satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection) aims at detecting this B-mode polarization precisely. It requires about 2000 detectors capable of detecting a frequency range from 50 GHz to 250 GHz with ultra low noise. Superconducting detectors are suitable for this requirement. We have fabricated and tested microwave kinetic inductance detectors (MKIDs) and developed a new readout system. We have designed antenna-coupled MKIDs. Quasi-particles are created by incident radiation and are detected as a change of the surface impedance of a superconductor strip. This change of the surface impedance is translated into a change of the resonant frequency of a microwave signal transmitted through the resonator. We also have developed a new readout system for MKIDs. The newly developed readout system is not only able to read out the amplitude and phase data with homodyne detection for multiple channels, but also provides a unique feature of tracking the resonant frequency of the target resonator. This mechanism enables us to detect signals with a large dynamic range. We report on the recent R&D status of the MKID development and of the readout system for LiteBIRD.

  10. Establishing the test platform for the Daya Bay RPC readout prototype system on NIM

    International Nuclear Information System (INIS)

    This paper describes the establishment of the test platform for the Daya Bay RPC readout prototype system on NIM. Based on this platform, a series of tests has been performed on the RPC detector and the FEC card. The data from those tests provide valuable references for future work. (authors)

  11. Development of microwave kinetic inductance detectors and their readout system for LiteBIRD

    International Nuclear Information System (INIS)

    Primordial gravitational waves generated by inflation have produced an odd-parity pattern, the B-mode, in the cosmic microwave background (CMB) polarization. LiteBIRD (Light satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection) aims at detecting this B-mode polarization precisely. It requires about 2000 detectors capable of covering a frequency range from 50 GHz to 250 GHz with ultra-low noise. Superconducting detectors are well suited to this requirement. We have fabricated and tested antenna-coupled microwave kinetic inductance detectors (MKIDs) and developed a new readout system. Quasi-particles created by incident radiation are detected as a change in the surface impedance of a superconducting strip; this change is translated into a change of the resonant frequency of a microwave signal transmitted through the resonator. The newly developed readout system is not only able to read out the amplitude and phase data with homodyne detection for multiple channels, but also provides a unique feature of tracking the resonant frequency of the target resonator. This mechanism enables us to detect signals with a large dynamic range. We report on the recent R&D status of the MKIDs under development and of the readout system for LiteBIRD
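The frequency-tracking readout described above can be sketched numerically: quasi-particles created by absorbed radiation shift the resonator's resonant frequency, and the readout follows the transmission minimum. A minimal illustration using a standard notch-resonator transmission model (all parameter values here are illustrative, not LiteBIRD design numbers):

```python
import numpy as np

def s21(f, f0, q_total=2e4, q_coupling=4e4):
    """Transmission past a notch-type resonator (standard MKID model)."""
    x = (f - f0) / f0
    return 1.0 - (q_total / q_coupling) / (1.0 + 2j * q_total * x)

def track_resonance(freqs, s21_measured):
    """Estimate the resonant frequency as the transmission minimum."""
    return freqs[np.argmin(np.abs(s21_measured))]

# Absorbed radiation shifts f0 downward; the tracker follows the minimum.
f0_dark = 5.000e9                    # resonant frequency, no illumination (Hz)
f0_lit  = 4.99995e9                  # shifted down by 50 kHz under illumination
freqs = np.linspace(4.9995e9, 5.0005e9, 20001)   # 50 Hz grid spacing

est_dark = track_resonance(freqs, s21(freqs, f0_dark))
est_lit  = track_resonance(freqs, s21(freqs, f0_lit))
print(est_dark - est_lit)            # → 50000.0, i.e. the 50 kHz shift
```

In the real system the tracking runs continuously in the readout electronics rather than over a swept frequency grid; the sketch only shows why following the transmission minimum recovers the optical signal over a large dynamic range.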

  12. Design and performance of the ABCD3TA ASIC for readout of silicon strip detectors in the ATLAS semiconductor tracker

    Czech Academy of Sciences Publication Activity Database

    Campabadal, F.; Fleta, C.; Key, M.; Böhm, Jan; Mikeštíková, Marcela; Šťastný, Jan

    2005-01-01

    Roč. 552, - (2005), s. 292-328. ISSN 0168-9002 R&D Projects: GA MŠk 1P04LA212 Institutional research plan: CEZ:AV0Z10100502 Keywords : front-end electronics * binary readout * silicon strip detectors * application specific integrated circuits * quality assurance Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.224, year: 2005

  13. A fast embedded readout system for large-area Medipix and Timepix systems

    International Nuclear Information System (INIS)

    In this work we present novel readout electronics for an X-ray sensor based on a Si crystal bump-bonded to an array of 3 × 2 Medipix ASICs. The pixel size is 55 μm × 55 μm, with a total of ∼ 400k pixels and a sensitive area of 42 mm × 28 mm. The readout electronics operates Medipix-2 MXR or Timepix ASICs at a clock speed of 125 MHz. The data acquisition system is centered around an FPGA, and each of the six ASICs has a dedicated I/O port for simultaneous data acquisition. The settings of the auxiliary devices (ADCs and DACs) are also processed in the FPGA. Moreover, a high-resolution timer operates the electronic shutter to select the exposure time from 8 ns to several milliseconds. A sophisticated trigger is available in hardware and software to synchronize the acquisition with external electro-mechanical motors. The system includes a diagnostic subsystem to check the sensor temperature and to control the cooling Peltier cells, and a programmable high-voltage generator to bias the crystal. A network cable transfers the data, encapsulated in the UDP protocol and streamed at 1 Gb/s. Therefore most notebooks or personal computers are able to process the data and to program the system without a dedicated interface. The data readout software is compatible with the well-known Pixelman 2.x, running on both Windows and GNU/Linux. Furthermore, the open architecture encourages users to write their own applications. With a low-level interface library which implements all the basic features, a MATLAB or Python script can be written for special manipulations of the raw data. In this paper we present selected images taken with a microfocus X-ray tube to demonstrate the capability to collect data at rates up to 120 fps, corresponding to 0.76 Gb/s

  14. A fast embedded readout system for large-area Medipix and Timepix systems

    Science.gov (United States)

    Brogna, A. S.; Balzer, M.; Smale, S.; Hartmann, J.; Bormann, D.; Hamann, E.; Cecilia, A.; Zuber, M.; Koenig, T.; Zwerger, A.; Weber, M.; Fiederle, M.; Baumbach, T.

    2014-05-01

    In this work we present novel readout electronics for an X-ray sensor based on a Si crystal bump-bonded to an array of 3 × 2 Medipix ASICs. The pixel size is 55 μm × 55 μm, with a total of ~ 400k pixels and a sensitive area of 42 mm × 28 mm. The readout electronics operates Medipix-2 MXR or Timepix ASICs at a clock speed of 125 MHz. The data acquisition system is centered around an FPGA, and each of the six ASICs has a dedicated I/O port for simultaneous data acquisition. The settings of the auxiliary devices (ADCs and DACs) are also processed in the FPGA. Moreover, a high-resolution timer operates the electronic shutter to select the exposure time from 8 ns to several milliseconds. A sophisticated trigger is available in hardware and software to synchronize the acquisition with external electro-mechanical motors. The system includes a diagnostic subsystem to check the sensor temperature and to control the cooling Peltier cells, and a programmable high-voltage generator to bias the crystal. A network cable transfers the data, encapsulated in the UDP protocol and streamed at 1 Gb/s. Therefore most notebooks or personal computers are able to process the data and to program the system without a dedicated interface. The data readout software is compatible with the well-known Pixelman 2.x, running on both Windows and GNU/Linux. Furthermore, the open architecture encourages users to write their own applications. With a low-level interface library which implements all the basic features, a MATLAB or Python script can be written for special manipulations of the raw data. In this paper we present selected images taken with a microfocus X-ray tube to demonstrate the capability to collect data at rates up to 120 fps, corresponding to 0.76 Gb/s.
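The 1 Gb/s UDP stream described above implies that the receiving PC must reassemble detector frames from individual datagrams, which may arrive out of order. The following sketch shows the general reassembly pattern; the 8-byte header layout (frame id, packet index, packet count) is a hypothetical example, not the actual Medipix/Pixelman wire format:

```python
import struct

HEADER = struct.Struct(">IHH")   # frame id, packet index, packets per frame

def packetize(frame_id, frame_bytes, mtu_payload=1024):
    """Split one detector frame into UDP-sized datagrams (hypothetical layout)."""
    chunks = [frame_bytes[i:i + mtu_payload]
              for i in range(0, len(frame_bytes), mtu_payload)]
    return [HEADER.pack(frame_id, idx, len(chunks)) + chunk
            for idx, chunk in enumerate(chunks)]

def reassemble(datagrams):
    """Rebuild complete frames from datagrams that may arrive out of order."""
    pending = {}
    for dgram in datagrams:
        frame_id, idx, n = HEADER.unpack_from(dgram)
        parts = pending.setdefault(frame_id, [None] * n)
        parts[idx] = dgram[HEADER.size:]
    return {fid: b"".join(parts) for fid, parts in pending.items()
            if None not in parts}

frame = bytes(range(256)) * 16        # 4 kB dummy pixel frame
datagrams = packetize(42, frame)
datagrams.reverse()                   # simulate out-of-order arrival
frames = reassemble(datagrams)
print(len(frames[42]), frames[42] == frame)   # 4096 True
```

A real receiver would read the datagrams from a UDP socket and add timeouts for lost packets; the in-memory version above keeps the sketch deterministic.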

  15. Simulating the ATLAS Distributed Data Management System

    International Nuclear Information System (INIS)

    The ATLAS Distributed Data Management system organizes more than 90 PB of physics data across more than 100 sites globally. Over 5 million files are transferred daily, with strongly varying usage patterns. For performance and scalability reasons it is imperative to adapt and improve the data management system continuously. Future system modifications in hardware, software and policy therefore need to be evaluated to confirm the intended results and to avoid unwanted side effects. Due to the complexity of large-scale distributed systems, this evaluation process is primarily based on expert knowledge, as conventional evaluation methods are inadequate. However, this expert-driven process lacks quantitative estimates and can lead to inaccurate or incorrect conclusions. In this work we present a novel, full-scale simulation framework. This modular simulator is able to accurately model the ATLAS Distributed Data Management system. The design and architecture of the component-based software are presented and discussed. The evaluation is based on comparison with historical workloads and concentrates on the accuracy of the simulation framework. Our results show that we can model the distributed data management system to within 80% accuracy.
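As a much-reduced illustration of the simulation idea, the toy model below treats each inter-site link as a FIFO server with fixed bandwidth and replays a list of transfer requests against it. The real simulator is component-based and models far more (queues, policies, varying workloads); the site names and numbers here are invented:

```python
from collections import defaultdict

def simulate(transfers, bandwidth_mb_s=100.0):
    """Toy model: each inter-site link serves transfers FIFO at a fixed rate.

    transfers: list of (submit_time_s, src, dst, size_mb).
    Returns the completion times, in time-sorted submission order.
    """
    link_free = defaultdict(float)   # when each (src, dst) link frees up
    done = []
    # process in submission order (a full simulator would use an event heap)
    for submit, src, dst, size in sorted(transfers):
        start = max(submit, link_free[(src, dst)])
        finish = start + size / bandwidth_mb_s
        link_free[(src, dst)] = finish
        done.append(finish)
    return done

jobs = [(0.0, "CERN", "BNL", 500.0),      # 500 MB -> 5 s on a 100 MB/s link
        (1.0, "CERN", "BNL", 500.0),      # queues behind the first transfer
        (0.0, "CERN", "TRIUMF", 200.0)]   # independent link, no queueing
print(simulate(jobs))                      # [5.0, 2.0, 10.0]
```

Comparing such simulated completion times against historical workloads is the basic accuracy check described in the abstract.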

  16. DAQ Hardware and software development for the ATLAS Pixel Detector

    CERN Document Server

    Stramaglia, Maria Elena; The ATLAS collaboration

    2015-01-01

    In 2014, the Pixel Detector of the ATLAS experiment was extended by about 12 million pixels with the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented by employing newly designed read-out hardware, which supports the full detector bandwidth even for calibration. The hardware is supported by an embedded software stack running on the read-out boards. The same boards will be used to upgrade the read-out bandwidth for the two outermost layers of the ATLAS Pixel Barrel (54 million pixels). We present the IBL read-out hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  17. The First Result of Global Commissioning of the ATLAS Endcap Muon Trigger System in ATLAS Cavern

    CERN Document Server

    Sugimoto, T; Takahashi, Y; Tomoto, M; Fukunaga, C; Ikeno, M; Iwasaki, H; Nagano, K; Nozaki, M; Sasaki, O; Tanaka, S; Yasu, Y; Hasegawa, Y; Oshita, H; Takeshita, T; Nomachi, M; Sugaya, Y; Kubota, T; Ishino, M; Kanaya, N; Kawamoto, T; Kobayashi, T; Kuwabara, T; Nomoto, H; Sakamoto, H; Yamaguchi, T; Kadosaka, T; Kawagoe, K; Kiyamura, H; Kurashige, H; Niwa, T; Ochi, A; Omachi, C; Takeda, H; Lifshitz, R; Lupu, N; Bressler, S; Tarem, S; Kajomovitz, E; Ben Ami, S; Bahat Treidel, O; Benhammou, Ya; Etzion, E; Lellouch, D; Levinson, L; Mikenberg, G; Roich, A

    2007-01-01

    We report on the ATLAS commissioning run from the viewpoint of the Thin Gap Chamber (TGC), the ATLAS end-cap muon trigger detector. All TGC sectors with on-detector electronics are to be installed in the ATLAS cavern by the end of September 2007. To integrate all sub-detectors before the physics run starting in early 2008, a global commissioning run together with other sub-detectors has been performed since June 2007. We have evaluated the performance of the complete trigger chain of the TGC electronics and provided cosmic-ray trigger signals to the other sub-systems in the global-run environment.

  18. Development and Characterization of Diamond and 3D-Silicon Pixel Detectors with ATLAS-Pixel Readout Electronics

    CERN Document Server

    Mathes, Markus

    2008-01-01

    Abstract: Hybrid pixel detectors are used for particle tracking in the innermost layers of current high energy experiments like ATLAS. After the proposed luminosity upgrade of the LHC, they will have to survive very high radiation fluences of up to 10^16 particles per cm^2 per lifetime. New sensor concepts and materials are required, which promise to be more radiation tolerant than the currently used planar silicon sensors. The most prominent candidates are so-called 3D-silicon and single crystal or poly-crystalline diamond sensors. Using the ATLAS pixel electronics, different detector prototypes with a pixel geometry of 400 × 50 μm^2 have been built. In particular, three devices have been studied in detail: a 3D-silicon and a single crystal diamond detector with an active area of about 1 cm^2, and a poly-crystalline diamond detector of the same size as a current ATLAS pixel detector module (2 × 6 cm^2). To characterize the devices regarding their particle detection efficiency and spatial resolution, the charge c...

  19. Development and characterization of diamond and 3D-silicon pixel detectors with ATLAS-pixel readout electronics

    International Nuclear Information System (INIS)

    Hybrid pixel detectors are used for particle tracking in the innermost layers of current high energy experiments like ATLAS. After the proposed luminosity upgrade of the LHC, they will have to survive very high radiation fluences of up to 10^16 particles per cm^2 per lifetime. New sensor concepts and materials are required, which promise to be more radiation tolerant than the currently used planar silicon sensors. The most prominent candidates are so-called 3D-silicon and single crystal or poly-crystalline diamond sensors. Using the ATLAS pixel electronics, different detector prototypes with a pixel geometry of 400 x 50 μm^2 have been built. In particular, three devices have been studied in detail: a 3D-silicon and a single crystal diamond detector with an active area of about 1 cm^2, and a poly-crystalline diamond detector of the same size as a current ATLAS pixel detector module (2 x 6 cm^2). To characterize the devices regarding their particle detection efficiency and spatial resolution, the charge collection inside a pixel cell as well as the charge sharing between adjacent pixels was studied using a high energy particle beam. (orig.)

  20. Development and characterization of diamond and 3D-silicon pixel detectors with ATLAS-pixel readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Mathes, Markus

    2008-12-15

    Hybrid pixel detectors are used for particle tracking in the innermost layers of current high energy experiments like ATLAS. After the proposed luminosity upgrade of the LHC, they will have to survive very high radiation fluences of up to 10{sup 16} particles per cm{sup 2} per life time. New sensor concepts and materials are required, which promise to be more radiation tolerant than the currently used planar silicon sensors. Most prominent candidates are so-called 3D-silicon and single crystal or poly-crystalline diamond sensors. Using the ATLAS pixel electronics different detector prototypes with a pixel geometry of 400 x 50 {mu}m{sup 2} have been built. In particular three devices have been studied in detail: a 3D-silicon and a single crystal diamond detector with an active area of about 1 cm{sup 2} and a poly-crystalline diamond detector of the same size as a current ATLAS pixel detector module (2 x 6 cm{sup 2}). To characterize the devices regarding their particle detection efficiency and spatial resolution, the charge collection inside a pixel cell as well as the charge sharing between adjacent pixels was studied using a high energy particle beam. (orig.)

  1. The Run-2 ATLAS Trigger System

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review will be given of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  2. The Run-2 ATLAS Trigger System

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger systems, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. Using a few examples, we will show the ...
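The rate reduction quoted in these abstracts corresponds to an overall rejection factor of order 10^5, split between the two trigger levels. A back-of-the-envelope check (the intermediate L1 output rate of 100 kHz and a ~1 kHz recording rate are representative Run-2 figures, not exact values from the abstract):

```python
# Illustrative two-level trigger budget (numbers are representative, not exact).
bunch_crossing_rate = 40e6     # Hz, LHC design bunch-crossing rate
l1_output_rate      = 100e3    # Hz, typical Run-2 Level-1 output budget
hlt_output_rate     = 1e3      # Hz, "a few hundred Hz to ~1 kHz" recording

l1_rejection  = bunch_crossing_rate / l1_output_rate
hlt_rejection = l1_output_rate / hlt_output_rate
total         = bunch_crossing_rate / hlt_output_rate

print(l1_rejection, hlt_rejection, total)   # 400.0 100.0 40000.0
```

The five-fold Run-2 rate increase mentioned above must be absorbed within roughly this same budget, which is why both trigger levels needed upgrades.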

  3. System test and noise performance studies at the ATLAS pixel detector

    International Nuclear Information System (INIS)

    The central component of the ATLAS Inner Tracker is the pixel detector. It consists of three barrel layers and three disk layers in the end-caps in both forward directions. The innermost barrel layer is mounted at a distance of about 5 cm from the interaction region. With its very high granularity, truly two-dimensional hit information, and fast readout, it is well suited to cope with the high densities of charged tracks expected this close to the interaction region. The huge number of readout channels necessitates a very complex services infrastructure for powering, readout and safety. After a description of the pixel detector and its services infrastructure, key results from the system test at CERN are presented. Furthermore, the noise performance of the pixel detector, crucial for high tracking and vertexing efficiencies, is studied. Measurements of the single-channel random noise are presented together with studies of common-mode noise and measurements of the noise occupancy using a random trigger generator. (orig.)

  4. System test and noise performance studies at the ATLAS pixel detector

    Energy Technology Data Exchange (ETDEWEB)

    Weingarten, J.

    2007-09-15

    The central component of the ATLAS Inner Tracker is the pixel detector. It consists of three barrel layers and three disk layers in the end-caps in both forward directions. The innermost barrel layer is mounted at a distance of about 5 cm from the interaction region. With its very high granularity, truly two-dimensional hit information, and fast readout, it is well suited to cope with the high densities of charged tracks expected this close to the interaction region. The huge number of readout channels necessitates a very complex services infrastructure for powering, readout and safety. After a description of the pixel detector and its services infrastructure, key results from the system test at CERN are presented. Furthermore, the noise performance of the pixel detector, crucial for high tracking and vertexing efficiencies, is studied. Measurements of the single-channel random noise are presented together with studies of common-mode noise and measurements of the noise occupancy using a random trigger generator. (orig.)
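The distinction drawn above between single-channel random noise and common-mode noise can be illustrated with synthetic data: a per-event offset shared by all channels inflates the raw noise, and subtracting the per-event channel mean removes it. The noise magnitudes below are arbitrary, not measured ATLAS pixel values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_channels = 2000, 128
single_channel = rng.normal(0.0, 5.0, (n_events, n_channels))   # ENC-like noise
common_mode    = rng.normal(0.0, 3.0, (n_events, 1))            # shared per event
raw = single_channel + common_mode

# Estimate the common mode per event as the mean over channels, then subtract.
cm_estimate = raw.mean(axis=1, keepdims=True)
corrected = raw - cm_estimate

print(raw.std(), corrected.std())   # raw ≈ sqrt(5^2 + 3^2) ≈ 5.8, corrected ≈ 5
```

In a real system test the correction is applied per module or per front-end chip, since the common mode is only shared within one readout unit.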

  5. Online Radiation Dose Measurement System for ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration

    2012-01-01

    Particle detectors and readout electronics in the high energy physics experiment ATLAS at the Large Hadron Collider at CERN operate in a radiation field containing photons, charged particles and neutrons. The particles in the radiation field originate from proton-proton interactions as well as from interactions of these particles with material in the experimental apparatus. In the innermost parts of ATLAS, detector components will be exposed to ionizing doses exceeding 100 kGy. Energetic hadrons will also cause displacement damage in silicon equivalent to fluences of several times 10^14 1 MeV neutrons per cm^2. Such radiation doses can have a severe influence on the performance of detectors. It is therefore very important to continuously monitor the accumulated doses to understand the detector performance and to correctly predict the lifetime of radiation-sensitive components. Measurements of doses are also important to verify the simulations and represent a crucial input to the models used for predicting future ...
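A standard way to relate such measurements to the quoted 1 MeV neutron-equivalent fluences is the current-related damage rate of silicon, ΔI = α · Φ_eq · V. The sketch below uses an approximate textbook value of α; it illustrates the scaling only and is not the actual ATLAS dose-monitoring algorithm:

```python
# Hedged sketch: NIEL-scaled fluence from a silicon leakage-current increase,
# using Delta_I = alpha * Phi_eq * V (alpha is the current-related damage rate).
ALPHA = 4e-17            # A/cm, approximate value at 20 C reference conditions

def fluence_from_leakage(delta_i_amps, depleted_volume_cm3):
    """Return the 1 MeV neutron-equivalent fluence in cm^-2."""
    return delta_i_amps / (ALPHA * depleted_volume_cm3)

# A 1 cm^2 x 300 um sensor (0.03 cm^3) whose leakage current grew by 120 uA:
phi_eq = fluence_from_leakage(120e-6, 0.03)
print(f"{phi_eq:.2e} cm^-2")    # 1.00e+14 cm^-2
```

Because α depends on temperature and annealing history, real dosimetry normalizes the measured currents to reference conditions before applying this relation.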

  6. The detector control system for the ATLAS semiconductor tracker assembly phase

    CERN Document Server

    Sfyrla, Anna

    2004-01-01

    The ATLAS semiconductor tracker (SCT) consists of approximately 16000 silicon micro-strip detectors with 6.3 million readout channels, built into 4088 modules. These are arranged into 2 end-caps of 9 disks each and 4 concentric barrel layers. A number of tests are performed before the final detector assembly. Initially, the individual components of the SCT are tested. Prior to assembly, tests on the full system setup, which includes other detectors, will be carried out. The coherent and safe operation of the detectors in each step of the tests is the basic task of the detector control system (DCS). The main building blocks of the DCS are the cooling system, the power supplies and the environmental interlock system. The main features of the SCT test setup are described and the monitoring and control systems are presented.

  7. A photonic readout and data acquisition system for deep-sea neutrino telescopes

    International Nuclear Information System (INIS)

    In the context of the KM3NeT Design Study, and building on the experience with the data acquisition system of the ANTARES telescope, an alternative readout and DAQ architecture has been developed for deep-sea neutrino telescopes. The system relies on sensor technology using photonic readout and a 10 Gb/s optical network for data acquisition and communication. Compared to ANTARES, more functionality has been migrated to shore, allowing for timely deployment of the telescope components and easy access to the system during the long lifetime of neutrino telescopes. The reconfiguration of the DAQ system is also performed on shore. Timing calibration is an integral part of the network architecture, providing event timing integrity of better than 1 ns. Although developed for use in the deep sea, the concept of the system can be applied elsewhere, e.g. in the LHC experiments.

  8. ATLAS Maintenance and Operation management system

    CERN Multimedia

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the actions of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are understaffed or overstaffed, will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web-based interface that combines the flexibility and comfort of a desktop application and intuitive data visualization and navigation techniques with a lightweight service-oriented architecture. We review the application, its usage within the ATLAS experiment, and its underlying design and implementation.

  9. Monitoring the atlas distributed data management system

    International Nuclear Information System (INIS)

    The ATLAS Distributed Data Management (DDM) system is evolving to provide a production-quality service for data distribution and data management support for production and user analysis. Monitoring the different components in the system has emerged as one of the key issues in achieving this goal. Its distributed nature over different grid infrastructures (EGEE, OSG and NDGF), with infrastructure-specific data management components, makes the task particularly challenging. Providing simple views of the status of the DDM components and data to users and site administrators is essential to effectively operate the system under realistic conditions. In this paper we present the design of the DDM monitoring system, the information flow and the data aggregation. We discuss its usage, the interactive functionality for end-users and the alarm system

  10. Integration of the EventIndex with other ATLAS systems

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Gallas, Elizabeth; Prokoshin, Fedor

    2015-01-01

    The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on Hadoop, necessitates revamping how information in this system relates to other ATLAS systems. In addition, the scope of this new application is different from that of the TAG System. It will store fewer derived quantities but more indexes, since the fundamental mechanisms for retrieving these indexes will be better integrated into all stages of processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems are fundamentally critical to assess dataset completeness, identify data duplication and check data integrity, but are also needed to enhance user and system interfaces accessing information in the EventIndex. This presentation will give an overview of the ATLAS systems involved, the relevant metadata, and describe the technologies...

  11. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    International Nuclear Information System (INIS)

    With the increasing size and degree of complexity of today's experiments in high energy physics the required amount of work and complexity to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit card sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.

  12. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    Science.gov (United States)

    Koestner, Stefan

    2009-09-01

    With the increasing size and degree of complexity of today's experiments in high energy physics the required amount of work and complexity to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit card sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
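The finite-state-machine approach described above, where commands propagate down a tree of devices and each node summarizes the state of its children, can be sketched as follows. The state and command names are simplified placeholders, not the actual PVSS/SMI state set used by LHCb:

```python
class Node:
    """Minimal hierarchical FSM in the spirit of an experiment control tree:
    commands propagate down, the parent's state summarizes its children."""
    COMMANDS = {"configure": ("NOT_READY", "READY"),
                "start":     ("READY", "RUNNING"),
                "stop":      ("RUNNING", "READY")}

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self._state = "NOT_READY"

    @property
    def state(self):
        # A leaf reports its own state; a parent summarizes its children.
        if not self.children:
            return self._state
        states = {c.state for c in self.children}
        return states.pop() if len(states) == 1 else "MIXED"

    def send(self, command):
        before, after = self.COMMANDS[command]
        if self.children:
            for child in self.children:
                child.send(command)
        elif self._state == before:
            self._state = after

boards = [Node(f"readout_board_{i}") for i in range(4)]
detector = Node("subdetector", boards)
detector.send("configure")
detector.send("start")
print(detector.state)    # RUNNING
```

The "MIXED" summary state is what lets an operator spot a single misbehaving readout board from the top of the tree without inspecting every device.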

  13. Development of the scalable readout system for micro-pattern gas detectors and other applications

    International Nuclear Information System (INIS)

    Developed within the RD51 Collaboration for the Development of Micro-Pattern Gas Detector Technologies, the Scalable Readout System (SRS) is intended as a general-purpose multi-channel readout solution for a wide range of detector types and detector complexities. The scalable architecture, achieved using multi-Gbps point-to-point links with no buses involved, allows users to tailor the system size to their needs. The modular topology enables the integration of different front-end ASICs, giving users the possibility to use the most appropriate front-end for their purpose or to build a heterogeneous experimental apparatus which integrates different front-ends into the same DAQ system. Current applications include LHC upgrade activities, geophysics and homeland security applications, as well as detector R&D. The system architecture, development and running experience will be presented, together with future prospects, ATCA implementation options and application possibilities.

  14. Development of the scalable readout system for micro-pattern gas detectors and other applications

    CERN Document Server

    Martoiu, S.; Tarazona, A; Toledo, J

    2013-01-01

    Developed within the RD51 Collaboration for the Development of Micro-Pattern Gas Detector Technologies, the Scalable Readout System (SRS) is intended as a general-purpose multi-channel readout solution for a wide range of detector types and detector complexities. The scalable architecture, achieved using multi-Gbps point-to-point links with no buses involved, allows users to tailor the system size to their needs. The modular topology enables the integration of different front-end ASICs, giving users the possibility to use the most appropriate front-end for their purpose or to build a heterogeneous experimental apparatus which integrates different front-ends into the same DAQ system. Current applications include LHC upgrade activities, geophysics and homeland security applications, as well as detector R&D. The system architecture, development and running experience will be presented, together with future prospects, ATCA implementation options and application possibilities.

  15. Modular pixelated detector system with the spectroscopic capability and fast parallel read-out

    OpenAIRE

    Vavřík, D. (Daniel); Holík, M.; Jakůbek, J; Jakůbek, M.; Kraus, V.; Krejčí, F.; Soukup, P. (Pavel); Tureček, D.; Vacík, J. (Jiří); Žemlička, J.

    2014-01-01

    A modular pixelated detector system was developed for imaging applications, where spectroscopic analysis of detected particles is advantageous e.g. for energy sensitive X-ray radiography, fluorescent and high resolution neutron imaging etc. The presented system consists of an arbitrary number of independent versatile modules. Each module is equipped with pixelated edgeless detector with spectroscopic ability and has its own fast read-out electronics. Design of the modules allows assembly of v...

  16. Small-Scale Readout Systems Prototype for the STAR PIXEL Detector

    OpenAIRE

    Szelezniak, Michal A.

    2008-01-01

    A prototype readout system for the STAR PIXEL detector in the Heavy Flavor Tracker (HFT) vertex detector upgrade is presented. The PIXEL detector is a Monolithic Active Pixel Sensor (MAPS) based silicon pixel vertex detector fabricated in a commercial CMOS process that integrates the detector and front-end electronics layers in one silicon die. Two generations of MAPS prototypes designed specifically for the PIXEL are discussed. We have constructed a prototype telescope system consisting of t...

  17. An active terminating pre-amplifier for delay line readout systems

    International Nuclear Information System (INIS)

    This report describes a pre-amplifier specifically designed for use with artificial delay lines in positional readout systems for multiwire proportional detectors. The pre-amplifier provides an active delay line termination which gives reduced noise compared with simple resistive termination whilst maintaining the signal rise-time to allow optimum timing discriminator performance. Test results show a significant improvement in resolution compared with resistive terminated systems

  18. Integration of the EventIndex with other ATLAS systems

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Gallas, E. J.; Prokoshin, F.

    2015-12-01

    The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on the Hadoop open-source software framework, necessitates revamping how information in this system relates to other ATLAS systems. It will store more indexes, since the fundamental mechanisms for retrieving these indexes will be better integrated into all stages of data processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems (conditions database, monitoring) are fundamentally critical to assess dataset completeness, identify data duplication and check data integrity, and also enhance access to information in the EventIndex through user and system interfaces. This paper gives an overview of the ATLAS systems involved and the relevant metadata, and describes the technologies we are deploying to complete these connections.
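The duplication and completeness checks mentioned above boil down to indexing events by a unique key and inspecting the datasets attached to each key. A minimal sketch of that pattern (the run/event numbers and dataset names are invented, and the real system operates on Hadoop rather than in-memory dictionaries):

```python
from collections import defaultdict

def build_index(records):
    """Map (run, event) -> list of dataset names holding that event."""
    index = defaultdict(list)
    for run, event, dataset in records:
        index[(run, event)].append(dataset)
    return index

records = [(358031, 101, "data18.AOD.r1"),
           (358031, 102, "data18.AOD.r1"),
           (358031, 101, "data18.AOD.r2")]   # same event, second processing

index = build_index(records)
duplicates = {key: ds for key, ds in index.items() if len(ds) > 1}
print(duplicates)    # {(358031, 101): ['data18.AOD.r1', 'data18.AOD.r2']}
```

Completeness checks work the same way in reverse: comparing the set of indexed keys for a dataset against the event count reported by the production system reveals missing events.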

  19. The performance of a high speed pipelined photomultiplier readout system in the Fermilab KTeV experiment

    International Nuclear Information System (INIS)

    The KTeV fixed target experiment at Fermilab is using an innovative scheme for reading out its 3100 channel CsI electromagnetic calorimeter. This pipelined readout system digitizes photomultiplier tube (PMT) signals over a 16-bit dynamic range with 8 bits of resolution at 53 MHz. The crucial element of the system is a custom Bi-CMOS integrated circuit which, in conjunction with an 8-bit Flash ADC, integrates and digitizes the PMT signal charge over each 18.9 nsec clock cycle (53 MHz) in a deadtimeless fashion. The digitizer circuit is local to the PMT base, and has an in-situ charge integration noise figure of 3 fC/sample. In this article, the readout system will be described and its performance including noise, cross-talk, linearity, stability, and reliability will be discussed
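
    Covering a 16-bit dynamic range with only 8 bits of resolution implies a range-switching (floating-point-like) digitization: the code gives 8 bits of precision relative to the selected range. The sketch below illustrates that principle only; it is not the actual KTeV circuit, and `encode`/`decode` are hypothetical names.

```python
# Illustration of auto-ranging digitization: an 8-bit code plus a range
# exponent spans a 16-bit dynamic range with relative (not absolute)
# precision. Not the actual KTeV Bi-CMOS circuit.

def encode(charge):
    """Return (range_exponent, 8-bit code) for a 0..65535 input."""
    if not 0 <= charge <= 0xFFFF:
        raise ValueError("outside 16-bit dynamic range")
    exp = 0
    while (charge >> exp) > 0xFF:   # shift until the value fits in 8 bits
        exp += 1
    return exp, charge >> exp

def decode(exp, code):
    """Reconstruct the charge; quantization step is 2**exp counts."""
    return code << exp
```

    Small signals keep full 8-bit precision (`encode(100)` returns range 0), while the largest signals are quantized in steps of 256 counts, which is how a single 8-bit Flash ADC can serve the whole dynamic range.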

  20. Beam test results for the upgraded LHCb RICH opto-electronic readout system

    CERN Document Server

    Carniti, Paolo

    2016-01-01

    The LHCb experiment is devoted to high-precision measurements of CP violation and search for New Physics by studying the decays of beauty and charmed hadrons produced at the Large Hadron Collider (LHC). Two RICH detectors are currently installed and operating successfully, providing a crucial role in the particle identification system of the LHCb experiment. Starting from 2019, the LHCb experiment will be upgraded to operate at higher luminosity, extending its potential for discovery and study of new phenomena. Both the RICH detectors will be upgraded and the entire opto-electronic system has been redesigned in order to cope with the new specifications, namely higher readout rates, and increased occupancies. The new photodetectors, readout electronics, mechanical assembly and cooling system have reached the final phase of development and their performance was thoroughly and successfully validated during several beam test sessions in 2014 and 2015 at the SPS facility at CERN. Details of the test setup and perf...

  1. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    Science.gov (United States)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  2. ATLAS detector control system data viewer

    International Nuclear Information System (INIS)

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand alone with a command-line interface to the data while the client offers a user friendly, browser independent interface. The selection of the meta-data of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plug-ins such as 'value over time' charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The meta-data selection and data output features can be used separately by XML configuration files. Security constraints have been taken into account in the implementation allowing the access of DDV by collaborators worldwide. (authors)

  3. ATLAS Detector Control System Data Viewer

    CERN Document Server

    Tsarouchas, Charilaos; Roe, S; Bitenc, U; Fehling-Kaschek, ML; Winkelmann, S; D’Auria, S; Hoffmann, D; Pisano, O

    2011-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System [1] (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand alone with a command-line interface to the data while the client offers a user friendly, browser independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as “value over time” charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by XML con...

  4. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously dispersed functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility to different running conditions is improved by an automatic balance of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...
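
    The caching idea behind incremental event building, in which previously fetched read-out data are reused and outstanding requests are packed together, can be sketched as follows. `DataCollector` and `fetch_from_ros` are illustrative names for this example, not the ATLAS HLT API.

```python
# Sketch of incremental event building with fragment caching: repeat
# requests for the same read-out links are served from a local cache,
# and the missing links of each request are fetched in one packed trip.
class DataCollector:
    def __init__(self, fetch_from_ros):
        self._fetch = fetch_from_ros   # callable: set of links -> {link: data}
        self._cache = {}               # per-event cache of ROS fragments
        self.ros_requests = 0          # counts trips to the ROS, not links

    def get_fragments(self, links):
        missing = {l for l in links if l not in self._cache}
        if missing:
            self.ros_requests += 1               # one packed request
            self._cache.update(self._fetch(missing))
        return {l: self._cache[l] for l in links}
```

    Two overlapping requests then cost only two trips to the read-out system, with the second trip fetching only the links not already cached.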

  5. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  6. ATLAS Grid Data Processing: system evolution and scalability

    International Nuclear Information System (INIS)

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software and Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users providing data for physics analysis and other ATLAS main activities.

  7. A read-out system for the Medipix2 chip capable of 500 frames per second

    International Nuclear Information System (INIS)

    High-speed X-ray imaging is a growing field that can be used to understand the microscopic mechanisms of different phenomena in biology and materials science. IFAE and CNM have developed a very high-speed readout system, named DEMAS, for the Medipix2. The system is able to read a single Medipix2 chip through the parallel bus at a rate of 1 kHz. With a duty cycle of 50%, the real sampling speed is 500 frames per second (fps). This implies that 1 ms is allocated to the exposure time and another millisecond is devoted to the read-out of the chip. In such a configuration, the raw data throughput is about 500 Mbit/s. For the first time we present examples of acquisition at 500 fps of moving samples with X-rays, working in direct capture and photon counting mode
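
    The quoted figures are self-consistent, as a quick check shows: a 1 ms readout alternating with a 1 ms exposure yields 500 complete frames per second, and 500 Mbit/s then corresponds to about 1 Mbit per frame.

```python
# Cross-check of the Medipix2 timing figures quoted in the abstract.
readout_time_s = 1e-3               # chip read through the parallel bus at 1 kHz
exposure_time_s = 1e-3              # 50% duty cycle: an equal exposure window
fps = 1 / (readout_time_s + exposure_time_s)   # alternating phases -> 500 fps

throughput_bit_s = 500e6            # quoted raw data throughput
bits_per_frame = throughput_bit_s / fps        # ~1 Mbit per frame
```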

  8. Cold front-end electronics and Ethernet-based DAQ systems for large LAr TPC readout

    CERN Document Server

    Autiero, D.; Carlus, B.; Declais, Y.; Gardien, S.; Girerd, C.; Marteau, J.; Mathez, H.

    2010-01-01

    Large LAr TPCs are among the most powerful detectors to address open problems in particle and astro-particle physics, such as CP violation in the leptonic sector, neutrino properties and their astrophysical implications, and proton decay searches. The scale of such detectors implies severe constraints on their readout and DAQ systems. We are carrying out an electronics R&D programme on a complete readout chain, including an ASIC located close to the collecting planes in the argon gas phase and a DAQ system based on smart Ethernet sensors implemented in the µTCA standard. The choice of the latter standard is motivated by the similarity of its constraints with those existing in the network telecommunication industry. We have also developed a synchronization scheme derived from the IEEE 1588 standard, complemented by the use of the clock recovered from the Gigabit link

  9. Development of the Photomultiplier-Tube Readout System for the CTA Large Size Telescope

    CERN Document Server

    Kubo, H; Awane, Y; Bamba, A; Barcelo, M; Barrio, J A; Blanch, O; Boix, J; Delgado, C; Fink, D; Gascon, D; Gunji, S; Hagiwara, R; Hanabata, Y; Hatanaka, K; Hayashida, M; Ikeno, M; Kabuki, S; Katagiri, H; Kataoka, J; Konno, Y; Koyama, S; Kishimoto, T; Kushida, J; Martinez, G; Masuda, S; Miranda, J M; Mirzoyan, R; Mizuno, T; Nagayoshi, T; Nakajima, D; Nakamori, T; Ohoka, H; Okumura, A; Orito, R; Saito, T; Sanuy, A; Sasaki, H; Sawada, M; Schweizer, T; Sugawara, R; Sulanke, K -H; Tajima, H; Tanaka, M; Tanaka, S; Tejedor, L A; Terada, Y; Teshima, M; Tokanai, F; Tsuchiya, Y; Uchida, T; Ueno, H; Umehara, K; Yamamoto, T

    2013-01-01

    We have developed a prototype of the photomultiplier tube (PMT) readout system for the Cherenkov Telescope Array (CTA) Large Size Telescope (LST). Two thousand PMTs along with their readout systems are arranged on the focal plane of each telescope, with one readout system per 7-PMT cluster. The Cherenkov light pulses generated by the air showers are detected by the PMTs and amplified in a compact, low noise and wide dynamic range gain block. The output of this block is then digitized at a sampling rate of the order of GHz using the Domino Ring Sampler DRS4, an analog memory ASIC developed at Paul Scherrer Institute. The sampler has 1,024 capacitors per channel and four channels are cascaded for increased depth. After a trigger is generated in the system, the charges stored in the capacitors are digitized by an external slow sampling ADC and then transmitted via Gigabit Ethernet. An onboard FPGA controls the DRS4, trigger threshold, and Ethernet transfer. In addition, the control and monitoring of the Cockcrof...
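
    Cascading four 1024-cell DRS4 channels quadruples the sampling depth. Assuming a 1 GHz sampling rate for illustration (the abstract says only "of the order of GHz"), the cascaded buffer spans about 4 µs of waveform:

```python
# Back-of-envelope for the DRS4 cascading described above; the 1 GHz
# sampling rate is an assumption for illustration.
cells_per_channel = 1024
cascaded_channels = 4
sampling_rate_hz = 1e9

depth_cells = cells_per_channel * cascaded_channels   # 4096 cells
depth_seconds = depth_cells / sampling_rate_hz        # ~4.1 microseconds
```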

  10. Glance Information System for ATLAS Management

    CERN Document Server

    De Oliveira Fernandes Moraes, L; The ATLAS collaboration; Ramos De Azevedo Evora, LH; Karam, K; Fink Grael, F; Pommes, K; Nessi, M; Cirilli, M

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, and 2900 physicists, engineers and computer scientists, plus 700 students, participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, the authors' list, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible to a limited group of people, and the system used was not designed to handle new requirements easily. Moreover, developers had to face problems such as differing terminology, diverse data modeling, heterogeneous databases and dissimilar user needs. Besides that, maintenance has to be an easy task, considering the experiment's long lifetime and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the dat...

  11. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  12. Systematic Comparison of the MINOS Near and Far Detector Readout Systems

    Energy Technology Data Exchange (ETDEWEB)

    Cabrera, Anatael

    2005-05-01

    The MINOS experiment is a long-baseline neutrino oscillation experiment that uses high-resolution L/E measurements to determine the atmospheric neutrino oscillation parameters to unprecedented precision. Two detectors have been built to realize the measurements: a Near detector, located about 1 km downstream of the beam target at Fermilab, and a Far detector, located at 736 km, at the Soudan Laboratory. The technique relies on the Near detector to measure the un-oscillated neutrino spectrum, while the Far detector measures the spectrum after oscillation. The comparison between the two measurements is expected to allow MINOS to measure Δm² to better than 10% precision. The Near and Far detectors have been built to be as similar as possible in order to minimize systematic effects, but they are equipped with different readout systems, as the beam event rates are very different. The MINOS calibration detector (CalDet), installed at CERN, was instrumented with both readout systems so that they could simultaneously measure and characterize the energy deposition (response and event topology) of known incident particles from test beams. This thesis presents the investigations to quantify the impact of the performance of both readout systems on the MINOS results, using the measurements obtained with CalDet. The responses of the two readout systems have been measured to be identical within a systematic uncertainty of 0.6%. The event topologies have been found to be negligibly affected. In addition, the performance of the detector simulations has been thoroughly investigated and validated to agree with data within a similar level of uncertainty.

  13. Results from a Prototype MAPS Sensor Telescope and Readout System with Zero Suppression for the Heavy Flavor Tracker at STAR

    OpenAIRE

    Greiner, Leo C.; Matis, Howard S.; Ritter, Hans G.; Rose, Andrew A.; Stezelberger, Thorsten; Sun, Xiangming; Szelezniak, Michal A.; Thomas, James H.; Vu, Chinh Q.; Wieman, Howard H.

    2008-01-01

    We describe a three Mimostar-2 Monolithic Active Pixel Sensor (MAPS) sensor telescope prototype with an accompanying readout system incorporating on-the-fly data sparsification. The system has been characterized and we report on the measured performance of the sensor telescope and readout system in beam tests conducted both at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL) and in the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). This effo...

  14. The readout and control system of the Dark Energy Camera

    Science.gov (United States)

    Honscheid, Klaus; Elliott, Ann; Annis, James; Bonati, Marco; Buckley-Geer, Elizabeth; Castander, Francisco; daCosta, Luiz; Fausti, Angelo; Karliner, Inga; Kuhlmann, Steve; Neilsen, Eric; Patton, Kenneth; Reil, Kevin; Roodman, Aaron; Thaler, Jon; Serrano, Santiago; Soares Santos, Marcelle; Suchyta, Eric

    2012-09-01

    The Dark Energy Camera (DECam) is a new 520 Mega Pixel CCD camera with a 3 square degree field of view designed for the Dark Energy Survey (DES). DES is a high precision, multi-bandpass, photometric survey of 5000 square degrees of the southern sky. DECam is currently being installed at the prime focus of the Blanco 4-m telescope at the Cerro Tololo Inter-American Observatory (CTIO). In this paper we describe SISPI, the data acquisition and control system of the Dark Energy Camera. SISPI is implemented as a distributed multi-processor system with a software architecture based on the Client-Server and Publish-Subscribe design patterns. The underlying message passing protocol is based on PYRO, a powerful distributed object technology system written entirely in Python. A distributed shared variable system was added to support exchange of telemetry data and other information between different components of the system. We discuss the SISPI infrastructure software, the image pipeline, the observer console and user interface architecture, image quality monitoring, the instrument control system, and the observation strategy tool.
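
    The Publish-Subscribe pattern underlying SISPI can be sketched with a minimal in-process message bus. This is a toy illustration only: the real system distributes messages across processes via PYRO, and the topic names below are invented.

```python
# Minimal in-process sketch of the Publish-Subscribe design pattern:
# publishers send to a topic without knowing who listens, and every
# subscriber callback registered for that topic receives the message.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)
```

    Decoupling producers from consumers this way is what lets components such as telemetry, image-quality monitoring and the observer console evolve independently.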

  15. Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade

    OpenAIRE

    Liu, H.; Benoit, M.; Chen, H.; Chen, K.; Di Bello, F. A.; Iacobucci, G.; Lanni, F.; Peric, I.; Ristic, B.; Vicente Barreto Pinto, M.; Wu, W.; Xu, L.; Jin, G.

    2016-01-01

    High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom designed boards, a Xilinx ZC706 development board, FELIX (Front-End LInk eXchange) PCIe...

  16. Front-end readout electronics considerations for Silicon Tracking System and Muon Chamber

    International Nuclear Information System (INIS)

    Silicon Tracking System (STS) and Muon Chamber (MUCH) are components of the Compressed Baryonic Matter (CBM) experiment at FAIR, Germany. STS will be built from 8 detector stations located in the aperture of the magnet. Each station will be built from double-sided silicon strip detectors connected via kapton microcables to the readout electronics at the perimeter of each station. The challenging physics program of the CBM experiment demands very high performance from the detector systems. The design of the readout ASIC requires finding an optimal solution for interaction-time and input-charge measurements under tight constraints on area (channel pitch: 58 μm), noise (< 1000 e- rms), power (< 10 mW/channel), radiation hardness and speed (average hit rate: 250 khit/s/channel). This paper presents the front-end electronics analysis towards a prototype STS and MUCH readout ASIC implementation in the UMC 180 nm CMOS process and its in-system performance, with emphasis on preferred detector and kapton microcable parameters and on the architecture and design of the input amplifiers

  17. A Prototype Scalable Readout System for Micro-pattern Gas Detectors

    CERN Document Server

    Zheng, Qi-Bin; Tian, Jing; Li, Cheng; Feng, Chang-Qing; An, Qi

    2016-01-01

    A scalable readout system (SRS) is designed to provide a general solution for different micro-pattern gas detectors. The system mainly consists of three kinds of modules: the ASIC card, the Adapter card and the Front-End Card (FEC). The ASIC cards, mounted with particular ASIC chips, are designed for receiving detector signals. The Adapter card is in charge of digitizing the output signals from several ASIC cards. The FEC, edge-mounted with the Adapter, has a FPGA-based reconfigurable logic and I/O interfaces, allowing users to choose various ASIC cards and Adapters for different types of detectors. The FEC transfers data through Gigabit Ethernet protocol realized by a TCP processor (SiTCP) IP core in field-programmable gate arrays (FPGA). The readout system can be tailored to specific sizes to adapt to the experiment scales and readout requirements. In this paper, two kinds of multi-channel ASIC chips, VA140 and AGET, are applied to verify the concept of this SRS architecture. Based on this VA140 or AGET SR...

  18. A Phase Readout Method for Wireless Passive Sensor Used in Pressure Measurement System

    Institute of Scientific and Technical Information of China (English)

    HONG Yingping; LIANG Ting; ZHENG Tingli; ZHANG Hairui; LIU Wenyi; XIONG Jijun

    2015-01-01

    A phase difference detection method for reading the resonant frequency through mutual coupling is designed to meet pressure measurement needs in harsh environments, and an integrated modular testing device built from custom hardware circuits is proposed. A description and discussion of the novel theoretical model of the phase readout system is presented, and a pressure test platform based on phase difference detection is also established, which measures the frequency of an LC resonant sensor fabricated on a 96% alumina ceramic substrate. The experimental results show that the proposed phase difference readout system provides swept-frequency detection over the 1–100 MHz bandwidth, with a high frequency resolution of 0.006 MHz. The sensitivity of the sensor is approximately 0.225 MHz/bar from 1 bar to 2 bar. The accuracy and functionality of the phase readout system point to a wide range of engineering applications, making it possible for the wireless passive LC resonant sensor to operate well in practical engineering situations outside the laboratory environment.
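
    Given the quoted sensitivity and frequency resolution, a linear calibration converts a measured resonant frequency back to pressure. The sketch below assumes the frequency rises linearly with pressure and invents a base frequency `F0_MHZ`; the abstract gives only the magnitude of the sensitivity, not the sign or the absolute resonance.

```python
# Linear frequency-to-pressure conversion using the quoted figures.
SENSITIVITY_MHZ_PER_BAR = 0.225   # quoted sensitivity, 1 bar to 2 bar
RESOLUTION_MHZ = 0.006            # quoted frequency resolution
F0_MHZ = 30.0                     # hypothetical resonance at 1 bar (not quoted)

def pressure_bar(f_mhz):
    """Pressure from measured resonant frequency, assuming linearity."""
    return 1.0 + (f_mhz - F0_MHZ) / SENSITIVITY_MHZ_PER_BAR

# The achievable pressure resolution follows from the two quoted numbers:
pressure_resolution_bar = RESOLUTION_MHZ / SENSITIVITY_MHZ_PER_BAR
```

    The implied pressure resolution, about 0.027 bar, follows directly from dividing the frequency resolution by the sensitivity.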

  19. The FE-I4 pixel readout system-on-chip resubmission for the insertable B-Layer project

    CERN Document Server

    Zivkovic, V; Garcia-Sciveres, M; Mekkaoui, A; Barbero, M; Darbo, G; Gnani, D; Hemperek, T; Menouni, M; Fougeron, D; Gensolen, F; Jensen, F; Caminada, L; Gromov, V; Kluit, R; Fleury, J; Krüger, H; Backhaus, M; Fang, X; Gonella, L; Rozanove, A; Arutinov, D

    2012-01-01

    The FE-I4 is a new pixel readout integrated circuit designed to meet the requirements of ATLAS experiment upgrades. The first samples of the FE-I4 engineering run (called FE-I4A) delivered promising results in terms of the requested performances. The FE-I4 team envisaged a number of modifications and fine-tuning before the actual exploitation, planned within the Insertable B-Layer (IBL) of ATLAS. As the IBL schedule was pushed significantly forward, a quick and efficient plan had to be devised for the FE-I4 redesign. This article will present the main objectives of the resubmission, together with the major changes that were a driving factor for this redesign. In addition, the top-level verification and test efforts of the FE-I4 will also be addressed.

  20. Level-1 Data Driver Card of the ATLAS New Small Wheel Upgrade Compatible with the Phase II 1 MHz Readout

    CERN Document Server

    Gkountoumis, Panagiotis; The ATLAS collaboration

    2016-01-01

    The Level-1 Data Driver Card (L1DDC) will be designed for the needs of the future upgrades of the innermost stations of the ATLAS end-cap muon spectrometer. The L1DDC is a high speed aggregator board capable of communicating with a large number of front-end electronics. It collects the Level-1 data along with monitoring data and transmits them to a network interface through a single bidirectional fiber link. In addition, the L1DDC board distributes trigger, time and configuration data coming from the network interface to the front-end boards. The L1DDC is fully compatible with the Phase II upgrade where the trigger rate is expected to reach 1 MHz. This paper describes the overall scheme of the data acquisition process and especially the L1DDC board. Finally, the electronics layout on the chamber is also mentioned

  1. The electronics readout system for the OPAL Vertex Drift Chamber

    International Nuclear Information System (INIS)

    The Vertex Drift Chamber for the OPAL experiment at LEP provides high quality track co-ordinates using multi-hit sub-nanosecond timing to detect the drifted electrons. This paper explains the electronic techniques that have been devised and implemented for the detector. The overall performance of the system is demonstrated with measurements from the final OPAL chamber. (author)

  2. Development of Frequency-Division Multiplexing Readout System for Large-Format TES X-ray Microcalorimeter Arrays

    Science.gov (United States)

    Sakai, K.; Yamamoto, R.; Takei, Y.; Mitsuda, K.; Yamasaki, N. Y.; Hidaka, M.; Nagasawa, S.; Kohjiro, S.; Miyazaki, T.

    2016-07-01

    We are developing a frequency-division multiplexing (FDM) readout system aimed at realizing the 400-pixel transition-edge sensor (TES) microcalorimeter array for the DIOS mission, as well as large-format arrays with more than a thousand TESs for future space missions such as ATHENA. The developed system consists of a low-power superconducting quantum interference device (SQUID), digital FDM electronics, and an analog front-end to bridge the SQUID and the digital electronics. Using the developed readout system, we performed a TES readout experiment and succeeded in multiplexing four TES signals with a single-staged cryogenic setup. We encountered two issues during the experiment: an excess noise and crosstalk. A brief overview of the developed system and the details, results, and issues of the TES multiplexing readout experiment are discussed.
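
    The FDM principle, in which each sensor amplitude-modulates its own carrier, the carriers share one wire, and a channel is recovered by mixing with the matching carrier, can be illustrated with a toy numeric example. The frequencies and the simple averaging low-pass filter are arbitrary choices for this sketch, not the parameters of the developed system.

```python
# Toy frequency-division multiplexing: sum amplitude-scaled carriers,
# then recover one channel by lock-in style mixing and averaging.
import math

def multiplex(amplitudes, carrier_freqs, times):
    """Sum of carriers, each scaled by its channel amplitude."""
    return [sum(a * math.cos(2 * math.pi * f * t)
                for a, f in zip(amplitudes, carrier_freqs))
            for t in times]

def demodulate(signal, freq, times):
    """Mix with the carrier and average; the factor 2 undoes cos^2 -> 1/2."""
    mixed = [s * math.cos(2 * math.pi * freq * t)
             for s, t in zip(signal, times)]
    return 2 * sum(mixed) / len(mixed)
```

    Averaging over an integer number of carrier periods makes the cross-channel terms vanish, so each amplitude is recovered independently from the single summed signal.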

  3. Development of Frequency-Division Multiplexing Readout System for Large-Format TES X-ray Microcalorimeter Arrays

    Science.gov (United States)

    Sakai, K.; Yamamoto, R.; Takei, Y.; Mitsuda, K.; Yamasaki, N. Y.; Hidaka, M.; Nagasawa, S.; Kohjiro, S.; Miyazaki, T.

    2016-03-01

    We are developing a frequency-division multiplexing (FDM) readout system aimed at realizing the 400-pixel transition-edge sensor (TES) microcalorimeter array for the DIOS mission, as well as large-format arrays with more than a thousand TESs for future space missions such as ATHENA. The developed system consists of a low-power superconducting quantum interference device (SQUID), digital FDM electronics, and an analog front-end to bridge the SQUID and the digital electronics. Using the developed readout system, we performed a TES readout experiment and succeeded in multiplexing four TES signals with a single-staged cryogenic setup. We encountered two issues during the experiment: an excess noise and crosstalk. A brief overview of the developed system and the details, results, and issues of the TES multiplexing readout experiment are discussed.

  4. Small-Scale Readout System Prototype for the STAR PIXEL Detector

    Energy Technology Data Exchange (ETDEWEB)

    Szelezniak, Michal; Anderssen, Eric; Greiner, Leo; Matis, Howard; Ritter, Hans Georg; Stezelberger, Thorsten; Sun, Xiangming; Thomas, James; Vu, Chinh; Wieman, Howard

    2008-10-10

    Development and prototyping efforts directed towards construction of a new vertex detector for the STAR experiment at the RHIC accelerator at BNL are presented. This new detector will extend the physics range of STAR by allowing for precision measurements of yields and spectra of particles containing heavy quarks. The innermost central part of the new detector is a high resolution pixel-type detector (PIXEL). PIXEL requirements are discussed as well as a conceptual mechanical design, a sensor development path, and a detector readout architecture. Selected progress with sensor prototypes dedicated to the PIXEL detector is summarized and the approach chosen for the readout system architecture validated in tests of hardware prototypes is discussed.

  5. Advanced alignment of the ATLAS tracking system

    CERN Document Server

    Butti, Pierfrancesco; The ATLAS collaboration

    2014-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, silicon planar modules or microstrips (PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2 T solenoidal magnetic field. Misalignments of the active detector elements and deformations of the structures (which can lead to "weak modes") deteriorate the resolution of the track reconstruction and lead to systematic biases in the measured track parameters. The applied alignment procedures exploit various advanced techniques in order to minimise track-hit residuals and remove detector deformations. For LHC Run II, the Pixel Detector has been refurbished and upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL).

  6. System Architecture of the Dark Energy Survey Camera Readout Electronics

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Theresa; /FERMILAB; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; /Barcelona, IFAE; Chappa, Steve; /Fermilab; de Vicente, Juan; /Madrid, CIEMAT; Holm, Scott; Huffman, Dave; Kozlovsky, Mark; /Fermilab; Martinez, Gustavo; /Madrid, CIEMAT; Moore, Todd; /Madrid, CIEMAT /Fermilab /Illinois U., Urbana /Fermilab

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2K×2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.
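
    The CCD counts are consistent with the roughly 520 megapixels usually quoted for DECam, as a quick check shows (taking 2K = 2048 and 4K = 4096 pixels):

```python
# Cross-check of the DECam focal-plane pixel budget.
science_ccds = 62
science_pixels = science_ccds * 2048 * 4096   # 2K x 4K imaging devices

guide_ccds = 12
guide_pixels = guide_ccds * 2048 * 2048       # 2K x 2K guide/alignment/focus
```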

  7. Monitoring the pre-processor system of the ATLAS level-1 calorimeter trigger

    International Nuclear Information System (INIS)

    The Pre-Processor (PPr) System of the ATLAS Level-1 Calorimeter Trigger is a highly parallel system, with hard-wired algorithms implemented in ASICs, that receives, digitises and processes over 7000 analogue trigger tower signals from the entire ATLAS calorimetry, and transmits the determined transverse energy deposits to the object-finding processors of the calorimeter trigger: the Cluster Processor and the Jet/Energy-sum Processor. The PPr System consists of 8 crates, each of which is equipped with 16 Preprocessor Modules that can each receive and process 64 analogue input signals. The Preprocessor System provides facilities to monitor the operation and performance of both its individual components and the Level-1 Calorimeter Trigger: pipelined readout of event-based monitoring data to the DAQ System, in order to document the Level-1 Trigger decision; diagnostic features implemented in the PPrASIC to establish rate maps and energy spectra per trigger tower; and an output interface to the crate controller CPU. Monitoring software for trigger-specific applications has been developed and is presented in this talk. (orig.)
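
    The crate arithmetic works out with headroom: 8 crates of 16 modules with 64 inputs each provide 8192 channels for the more than 7000 trigger-tower signals.

```python
# Channel budget of the Pre-Processor System as described above.
crates = 8
modules_per_crate = 16
channels_per_module = 64
total_channels = crates * modules_per_crate * channels_per_module
```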

  8. The ATLAS Data Flow system in Run2: Design and Performance

    CERN Document Server

    Rifki, Othmane; The ATLAS collaboration

    2016-01-01

    The ATLAS detector uses a real-time selective triggering system to reduce the 40 MHz collision rate to its data-storage capacity of about 1 kHz. A hardware first-level trigger limits the rate to 100 kHz, and a software high-level trigger selects events for offline analysis. Building on the experience gained during the successful first run of the LHC, the ATLAS Trigger and Data Acquisition system has been simplified and upgraded to take advantage of state-of-the-art technologies. The Dataflow element of the system is composed of distributed hardware and software responsible for buffering and transporting event data from the Readout system to the High Level Trigger and to event storage. This system has been reshaped in order to maximize the flexibility and efficiency of the data-selection process. The updated dataflow differs from the previous implementation in both architecture and performance. The biggest difference is within the high level trigger, where the merger of region-of-inte...
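
    The two trigger stages quoted in the abstract imply fixed rejection factors, which follow directly from the rates; a back-of-the-envelope sketch (rates taken from the abstract, function name hypothetical):

    ```python
    def rejection_factor(rate_in_hz, rate_out_hz):
        """Events examined per event kept at one trigger stage."""
        return rate_in_hz / rate_out_hz

    l1 = rejection_factor(40e6, 100e3)   # hardware Level-1: 40 MHz -> 100 kHz
    hlt = rejection_factor(100e3, 1e3)   # software HLT:    100 kHz -> 1 kHz
    total = l1 * hlt                     # overall: roughly 1 in 40,000 events is stored
    ```

    The asymmetry of the two factors reflects the division of labour: the hardware stage makes a coarse decision in microseconds, while the software stage can afford full event reconstruction at the lower input rate.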

  9. The consistency service of the ATLAS Distributed Data Management system

    International Nuclear Information System (INIS)

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures increases. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. The service is fed by the different ATLAS tools (analysis tools, production tools, DQ2 site services) or by site administrators, who report corrupted or lost files. It automatically corrects the reported errors and informs the users in case of irrecoverable file loss.
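
    The workflow described (collect bad-file reports, recover from surviving replicas, otherwise declare the file lost and notify users) can be sketched as a toy model. The class and method names below are hypothetical and are not the DQ2 API.

    ```python
    class ConsistencyService:
        """Toy consistency service: repair a file from replicas or declare it lost."""

        def __init__(self, replica_catalog):
            # filename -> set of sites believed to hold a good copy
            self.replica_catalog = replica_catalog
            self.notifications = []

        def report_bad_file(self, filename, site):
            """A tool or site admin reports a corrupted/lost copy at `site`."""
            good_sites = self.replica_catalog.get(filename, set()) - {site}
            if good_sites:
                # at least one healthy replica exists: re-replicate to the bad site
                self.replica_catalog[filename] = good_sites | {site}
                return "repaired"
            # no replica left anywhere: irrecoverable loss, inform the users
            self.replica_catalog.pop(filename, None)
            self.notifications.append(f"{filename} irrecoverably lost")
            return "lost"

    svc = ConsistencyService({"data.root": {"CERN", "BNL"}, "mc.root": {"FZK"}})
    ```

    The essential design point survives the simplification: the service is purely reactive, driven by reports from producers and consumers of the data rather than by continuous scanning.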

  10. Testing System Based on Virtual Instrument for Readout Circuit of Infrared Focal Plane Array

    Institute of Scientific and Technical Information of China (English)

    XUE Lian; MENG Li-ya; YUAN Xiang-hui

    2008-01-01

    The readout integrated circuit (ROIC) is one of the most important components of a hybrid-integrated infrared focal plane array (IRFPA), and it should be tested on wafer to ensure product yield before bonding. This paper presents an on-wafer testing system based on LabVIEW for IRFPA ROICs. The system first determines whether row crosstalk is present and then conducts quantitative measurements. This low-cost system is easy to expand and upgrade, is flexible, and has been employed in the testing of several kinds of IRFPA ROICs to measure parameters such as saturated output voltage, non-uniformity, dark noise and dynamic range.
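
    The figures of merit listed above reduce to simple statistics over measured output samples. A rough sketch of how they might be computed is shown below; the exact formula conventions (relative rms for non-uniformity, 20 log10 of signal over noise for dynamic range) are common choices but are assumptions here, not taken from the paper.

    ```python
    import math

    def roic_figures(outputs_v, dark_samples_v):
        """Compute common ROIC figures of merit from measured output voltages.

        outputs_v: per-pixel outputs under uniform illumination (volts)
        dark_samples_v: repeated samples of one channel with no input (volts)
        """
        mean = sum(outputs_v) / len(outputs_v)
        # non-uniformity: relative spread of pixel outputs under flat illumination
        variance = sum((v - mean) ** 2 for v in outputs_v) / len(outputs_v)
        non_uniformity = math.sqrt(variance) / mean
        # dark noise: rms fluctuation of one channel with no input signal
        dark_mean = sum(dark_samples_v) / len(dark_samples_v)
        dark_noise = math.sqrt(sum((v - dark_mean) ** 2 for v in dark_samples_v)
                               / len(dark_samples_v))
        # dynamic range in dB: largest output swing over the noise floor
        dynamic_range_db = 20 * math.log10(max(outputs_v) / dark_noise)
        return non_uniformity, dark_noise, dynamic_range_db
    ```

    A test bench built on virtual instruments typically automates exactly this kind of reduction: sweep the stimulus, capture waveforms, and boil them down to a handful of pass/fail numbers per die.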

  11. Performance of the prototype readout system for the CMS endcap hadron calorimeter upgrade

    CERN Document Server

    Pastika, Nathaniel Joseph

    2015-01-01

    The CMS experiment at the CERN Large Hadron Collider (LHC) will upgrade the photon detection and readout systems of its barrel and endcap hadron calorimeters (HCAL) through the second long shutdown of the LHC in 2018. The upgrade includes new silicon photomultipliers (SiPMs), SiPM control electronics, signal digitization via the Fermilab QIE11 ASIC, data formatting and serialization via a Microsemi FPGA, and data transmission via CERN Versatile Link technology. The first prototype system for the endcap HCAL has been assembled and characterized on the bench and in a test beam. The design of this new system and prototype performance are described.

  12. Performance of the prototype readout system for the CMS endcap hadron calorimeter upgrade

    Science.gov (United States)

    Pastika, N. J.

    2016-03-01

    The CMS experiment at the CERN Large Hadron Collider (LHC) will upgrade the photon detection and readout systems of its barrel and endcap hadron calorimeters (HCAL) through the second long shutdown of the LHC in 2018. The upgrade includes new silicon photomultipliers (SiPMs), SiPM control electronics, signal digitization via the Fermilab QIE11 ASIC, data formatting and serialization via a Microsemi FPGA, and data transmission via CERN Versatile Link technology. The first prototype system for the endcap HCAL has been assembled and characterized on the bench and in a test beam. The design of this new system and prototype performance are described.

  13. A front-end readout Detector Board for the OpenPET electronics system

    OpenAIRE

    Choong, W. -S.; Abu-Nimeh, F.; Moses, W. W.; Peng, Q.; Vu, C.Q.; Wu, J.-Y.

    2015-01-01

    We present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, an...

  14. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, M.; Cafagna, F.; Fiergolski, A.; Radicioni, E.

    2013-11-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and built, based on the Scalable Readout System (SRS) developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The obtained results and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS to the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.
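
    A bus-limited trigger ceiling like the 1 kHz figure quoted above is simply available bandwidth divided by event size. The sketch below illustrates the scaling; the 40 MB/s effective VME transfer rate and 40 kB event size are illustrative assumptions, not TOTEM specifications.

    ```python
    def max_trigger_rate_hz(bus_bandwidth_bytes_s, event_size_bytes):
        """Highest sustainable L1A rate before the readout bus saturates."""
        return bus_bandwidth_bytes_s / event_size_bytes

    # illustrative: ~40 MB/s effective VME transfer rate and ~40 kB events
    vme_rate = max_trigger_rate_hz(40e6, 40e3)   # -> 1000.0 Hz ceiling
    # a 10x faster readout link lifts the ceiling proportionally
    srs_rate = max_trigger_rate_hz(400e6, 40e3)  # -> 10000.0 Hz
    ```

    The same arithmetic explains the two upgrade levers named in the abstract: raise the bandwidth (the SRS links), or shrink the data volume per accepted trigger (hardware second-level filtering).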

  15. Design of a large dynamics fast acquisition device: application to readout of the electromagnetic calorimeter in the ATLAS experiment; Conception d'un dispositif d'acquisition rapide de grande dynamique: application à la lecture du calorimètre électromagnétique de l'expérience ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Bussat, Jean-Marie [Universite de Paris Sud, 91 - Orsay (France)]

    1998-06-05

    The construction of the new particle accelerator, the LHC (Large Hadron Collider), at CERN entails many research and development projects. This is the case in electronics, where the problem of acquiring large-dynamic-range signals at high sampling frequencies arises. Typically, the requirements are a dynamic range of about 65,000 (around 16 bits) at 40 MHz. Some solutions to this problem are presented. One of them uses a commercial analog-to-digital converter; this case brings up the need for signal-conditioning equipment. This thesis describes a way of building such a system, called a 'multi-gain system'. An application of this method is then presented: the realization of an automatic gain-switching integrated circuit, designed for the readout of the ATLAS electromagnetic calorimeter. The choice and calculation of the components of this system are described, followed by the results of measurements on a prototype fabricated in the AMS 1.2 µm BiCMOS process. Possible enhancements are also presented. We conclude on the feasibility of such a system and its various applications in a number of fields not restricted to particle physics. (author) 33 refs., 132 figs., 22 tabs.
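
    The multi-gain idea can be illustrated with a toy model: pick the highest gain that keeps the amplified signal inside a modest ADC range, record both the selected gain and the ADC code, and divide the gain back out offline. The gain values and the 12-bit ADC below are illustrative assumptions, not the parameters of the actual chip.

    ```python
    ADC_BITS = 12
    ADC_FULL_SCALE = 2 ** ADC_BITS - 1   # 4095 counts
    GAINS = (100, 10, 1)                 # try the highest gain first

    def acquire(signal):
        """Return (gain, adc_code) for a signal expressed in ADC counts at gain 1."""
        for gain in GAINS:
            if signal * gain <= ADC_FULL_SCALE:
                return gain, round(signal * gain)
        return GAINS[-1], ADC_FULL_SCALE  # overflow: clip at full scale

    def reconstruct(gain, adc_code):
        """Undo the front-end gain to recover the original signal amplitude."""
        return adc_code / gain

    # a small signal is digitized at gain 100, keeping quantization error tiny
    gain, code = acquire(3.7)
    value = reconstruct(gain, code)
    ```

    With a gain ratio of 100 between the highest and lowest ranges, the ~4000-count ADC scale is stretched by roughly a factor 100, which is how a 12-bit converter can approach the ~16-bit requirement quoted in the abstract.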

  16. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Heller, C; The ATLAS collaboration

    2011-01-01

    ATLAS is one of the multipurpose experiments that record the products of the LHC proton-proton and heavy-ion collisions. In order to reconstruct trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built using two different technologies, silicon planar sensors (pixel and microstrips) and drift-tube based detectors. Together they constitute the ATLAS Inner Detector, which is embedded in a 2 T axial field. Efficiently reconstructing tracks from charged particles traversing the detector, and precisely measuring their momenta, is of crucial importance for physics analyses. In order to achieve its scientific goals, an alignment of the ATLAS Inner Detector is required to accurately determine its more than 700,000 degrees of freedom. The goal of the alignment is set such that the limited knowledge of the sensor locations should not deteriorate the resolution of track parameters by more than 20% with respect to the intrinsic tracker resolution. The implementation of t...

  17. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Heller, C; The ATLAS collaboration

    2011-01-01

    ATLAS is one of four multipurpose experiments that record the products of the LHC proton-proton collisions. In order to reconstruct trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built using two different technologies, silicon planar sensors (pixel and microstrips) and drift-tube based detectors. Together they constitute the ATLAS Inner Detector, which is embedded in a 2 T solenoidal field. Efficiently reconstructing tracks from charged particles traversing the detector, and precisely measuring their momenta, is of crucial importance for physics analyses. In order to achieve its scientific goals, an alignment of the ATLAS Inner Detector is required to accurately determine its almost 36,000 degrees of freedom. The goal of the alignment is set such that the limited knowledge of the sensor locations should not deteriorate the resolution of track parameters by more than 20% with respect to the intrinsic tracker resolution. The resulting required precision f...

  18. Performance of the ATLAS Trigger System in 2010

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acerbi, Emilio; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Aderholz, Michael; Adomeit, Stefanie; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Akiyama, Kunihiro; Alam, Mohammad; Alam, Muhammad Aftab; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alessandria, Franco; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amaral, Pedro; Amelung, Christoph; Ammosov, Vladimir; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Andrieux, Marie-Laure; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonelli, Stefano; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoun, Sahar; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Arik, Engin; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Artoni, Giacomo; Arutinov, David; Asai, Shoji; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Aubert, Bernard; Auerbach, Benjamin; Auge, Etienne; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; 
Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baccaglioni, Giuseppe; Bacci, Cesare; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Bachy, Gerard; Backes, Moritz; Backhaus, Malte; Badescu, Elisabeta; Bagnaia, Paolo; Bahinipati, Seema; Bai, Yu; Bailey, David; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Mark; Baker, Sarah; Baltasar Dos Santos Pedrosa, Fernando; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barashkou, Andrei; Barbaro Galtieri, Angela; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Barton, Adam Edward; Bartsch, Detlef; Bartsch, Valeria; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Battistoni, Giuseppe; Bauer, Florian; Bawa, Harinder Singh; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Beloborodova, Olga; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Benchouk, Chafik; Bendel, Markus; Benedict, Brian Hugues; Benekos, Nektarios; Benhammou, Yan; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; 
Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernardet, Karim; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Bertinelli, Francesco; Bertolucci, Federico; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blazek, Tomas; Blocker, Craig; Blocki, Jacek; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bolnet, Nayanka Myriam; Bona, Marcella; Bondarenko, Valery; Boonekamp, Maarten; Boorman, Gary; Booth, Chris; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Botterill, David; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boulahouache, Chaouki; Bourdarios, Claire; Bousson, Nicolas; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozhko, Nikolay; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Breton, Dominique; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodbeck, Timothy; Brodet, Eyal; Broggi, Francesco; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Brown, Heather; Brubaker, 
Erik; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Buanes, Trygve; Bucci, Francesca; Buchanan, James; Buchanan, Norman; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Buira-Clark, Daniel; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, François; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Byatt, Tom; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Caloi, Rita; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camard, Arnaud; Camarri, Paolo; Cambiaghi, Mario; Cameron, David; Cammin, Jochen; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capriotti, Daniele; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Caso, Carlo; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Cataneo, Fernando; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Cazzato, Antonio; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Cevenini, Francesco; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; 
Chapleau, Bertrand; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Li; Chen, Shenjian; Chen, Tingyang; Chen, Xin; Cheng, Shaochen; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chislett, Rebecca Thalatta; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciba, Krzysztof; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Clifft, Roger; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coe, Paul; Cogan, Joshua Godfrey; Coggeshall, James; Cogneras, Eric; Cojocaru, Claudiu; Colas, Jacques; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Consonni, Michele; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cook, James; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Crescioli, Francesco; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; 
Crépé-Renaudin, Sabine; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Cuneo, Stefano; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czirr, Hendrik; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Rocha Gesualdi Mello, Aline; Da Silva, Paulo Vitor; Da Via, Cinzia; Dabrowski, Wladyslaw; Dahlhoff, Andrea; Dai, Tiesheng; Dallapiccola, Carlo; Dam, Mogens; Dameri, Mauro; Damiani, Daniel; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Daum, Cornelis; Dauvergne, Jean-Pierre; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Eleanor; Davies, Merlin; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Dawson, John; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De La Taille, Christophe; De la Torre, Hector; De Lotto, Barbara; De Mora, Lee; De Nooij, Lucie; De Oliveira Branco, Miguel; De Pedis, Daniele; de Saintignon, Paul; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Deile, Mario; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delpierre, Pierre; Delruelle, Nicolas; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Devetak, Erik; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, 
Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dieli, Michele Vincenzo; Dietl, Hans; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djilkibaev, Rashid; Djobava, Tamar; do Vale, Maria Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobbs, Matt; Dobinson, Robert; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Dodd, Jeremy; Dogan, Ozgen Berkol; Doglioni, Caterina; Doherty, Tom; Doi, Yoshikuni; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donadelli, Marisilvia; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dosil, Mireia; Dotti, Andrea; Dova, Maria-Teresa; Dowell, John; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Drees, Jürgen; Dressnandt, Nandor; Drevermann, Hans; Driouichi, Chafik; Dris, Manolis; Dubbert, Jörg; Dubbs, Tim; Dube, Sourabh; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen, Michael; Duerdoth, Ian; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Duxfield, Robert; Dwuznik, Michal; Dydak, Friedrich; Dzahini, Daniel; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckert, Simon; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Ely, Robert; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienne, Francois; Etienvre, 
Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Fakhrutdinov, Rinat; Falciano, Speranza; Falou, Alain; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Favareto, Andrea; Fayard, Louis; Fazio, Salvatore; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Ivan; Fedorko, Woiciech; Fehling-Kaschek, Mirjam; Feligioni, Lorenzo; Fellmann, Denis; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fischer, Peter; Fisher, Matthew; Fisher, Steve; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Föhlisch, Florian; Fokitis, Manolis; Fonseca Martin, Teresa; Forbush, David Alan; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Foster, Joe; Fournier, Daniel; Foussat, Arnaud; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Frank, Tal; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, KK; Gao, Yongsheng; 
Gapienko, Vladimir; Gaponenko, Andrei; Garberson, Ford; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Garvey, John; Gatti, Claudio; Gaudio, Gabriella; Gaumer, Olivier; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gayde, Jean-Christophe; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerlach, Peter; Gershon, Avi; Geweniger, Christoph; Ghazlane, Hamid; Ghez, Philippe; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gillberg, Dag; Gillman, Tony; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giunta, Michele; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Golovnia, Serguei; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; Gonidec, Allain; Gonzalez, Saul; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gorokhov, Serguei; Goryachev, Vladimir; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gouanère, 
Michel; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grabski, Varlen; Grafström, Per; Grah, Christian; Grahn, Karl-Johan; Grancagnolo, Francesco; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Greenfield, Debbie; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregor, Ingrid-Maria; Grenier, Philippe; Griesmayer, Erich; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grinstein, Sebastian; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grognuz, Joel; Groh, Manfred; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guarino, Victor; Guest, Daniel; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guindon, Stefan; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Guo, Jun; Gupta, Ambreesh; Gusakov, Yury; Gushchin, Vladimir; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hackenburg, Robert; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Hahn, Ferdinand; Haider, Stefan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamal, Petr; Hamilton, Andrew; Hamilton, Samuel; Han, Hongguang; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, John Renner; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Haruyama, Tomiyoshi; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Hatch, Mark; Hauff, Dieter; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawes, Brian; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Donovan; Hayakawa, Takashi; Hayden, Daniel; Hayward, Helen; 
Haywood, Stephen; Hazen, Eric; He, Mao; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heine, Kristin; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heldmann, Michael; Heller, Mathieu; Hellman, Sten; Helsens, Clement; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Henry-Couannier, Frédéric; Hensel, Carsten; Henß, Tobias; Hernandez, Carlos Medina; Hernández Jiménez, Yesenia; Herrberg, Ruth; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Hidvegi, Attila; Higón-Rodriguez, Emilio; Hill, Daniel; Hill, John; Hill, Norman; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holder, Martin; Holmes, Alan; Holmgren, Sven-Olof; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Hong, Tae Min; Hooft van Huysduynen, Loek; Horazdovsky, Tomas; Horn, Claus; Horner, Stephan; Horton, Katherine; Hostachy, Jean-Yves; Hou, Suen; Houlden, Michael; Hoummada, Abdeslam; Howarth, James; Howell, David; Hristova, Ivana; Hrivnac, Julius; Hruska, Ivan; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Hughes-Jones, Richard; Huhtinen, Mika; Hurst, Peter; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibbotson, Michael; Ibragimov, Iskander; Ichimiya, Ryo; Iconomidou-Fayard, Lydia; Idarraga, John; Idzik, Marek; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Imbault, Didier; Imhaeuser, Martin; Imori, Masatoshi; Ince, Tayfun; Inigo-Golfin, Joaquin; Ioannou, Pavlos; Iodice, Mauro; Ionescu, Gelu; 
Irles Quiles, Adrian; Ishii, Koji; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Itoh, Yuki; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jankowski, Ernest; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jelen, Kazimierz; Jen-La Plante, Imai; Jenni, Peter; Jeremie, Andrea; Jež, Pavel; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Ge; Jin, Shan; Jinnouchi, Osamu; Joergensen, Morten Dam; Joffe, David; Johansen, Lars; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tegid; Jones, Tim; Jonsson, Ove; Joram, Christian; Jorge, Pedro; Joseph, John; Ju, Xiangyang; Juranek, Vojtech; Jussel, Patrick; Kabachenko, Vasily; Kabana, Sonja; Kaci, Mohammed; Kaczmarska, Anna; Kadlecik, Peter; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagoz, Muge; Karnevskiy, Mikhail; Karr, Kristo; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kennedy, John; Kenney, Christopher John; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Ketterer, Christian; 
Keung, Justin; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Kholodenko, Anatoli; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kisielewska, Danuta; Kittelmann, Thomas; Kiver, Andrey; Kiyamura, Hironori; Kladiva, Eduard; Klaiber-Lodewigs, Jonas; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knobloch, Juergen; Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Kopikov, Sergey; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, 
Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasel, Olaf; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuykendall, William; Kuze, Masahiro; Kuzhir, Polina; Kvasnicka, Ondrej; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Landsman, Hagar; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lapin, Vladimir; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Lau, Wing; Laurelli, Paolo; Lavorato, Antonia; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, 
Daniel; Leltchouk, Mikhail; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levitski, Mikhail; Lewandowska, Marta; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Shengli; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lu, Liang; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lupi, Anna; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; 
Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Maß, Martin; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; 
Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meinhardt, Jens; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meuser, Stefan; Meyer, Carsten; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Miele, Paola; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohn, Bjarte; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morin, Jerome; Morita, Youhei; Morley, Anthony Keith; Mornacchi, Giuseppe; Morone, Maria-Christina; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, 
Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muijs, Sandra; Muir, Alex; Munwes, Yonathan; Murakami, Koichi; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Nesterov, Stanislav; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nomoto, Hiroshi; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nyman, Tommi; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohska, Tokio Kenneth; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olcese, Marco; Olchevski, Alexander; Oliveira, Miguel 
Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Øye, Ola; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Paige, Frank; Pajchel, Katarina; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Peric, Ivan; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Peshekhonov, Vladimir; Peters, Onne; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Pickford, Andrew; Piec, Sebastian Marcin; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João 
Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Plano, Will; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Porter, Robert; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Lawrence; Price, Michael John; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Ramstedt, Magnus; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rensch, Bertram; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; 
Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rieke, Stefan; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodier, Stephane; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rossi, Lucio; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rulikowska-Zarebska, Elzbieta; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Runolfsson, Ogmundur; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandstroem, Rikard; Sandvoss, 
Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Savva, Panagiota; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmidt, Michael; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Christian; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; 
Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siebel, Anca-Mirela; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloan, Terrence; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sorbi, Massimo; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiriti, Eleuterio; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockmanns, Tobias; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, 
Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; 
Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Treis, Johannes; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, 
Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Viret, Sébastien; Virzi, Joseph; Vitale, Antonio; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; 
White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Anton; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; 
Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2012-01-01

    Proton-proton collisions at $\sqrt{s}=7$ TeV and heavy-ion collisions at $\sqrt{s_{NN}}=2.76$ TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch-crossing rate of 40 MHz, and the ATLAS trigger system is designed to record approximately 200 of these crossings per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures, such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010, and the performance of the trigger system components and selections based on the 2010 collision data are presented. A brief outline of plans for the trigger system in 2011 is also given.
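The two rates quoted above imply a steep online rejection. A back-of-envelope sketch, assuming the nominal 40 MHz bunch-crossing rate and the approximate 200 Hz recording rate from the abstract:

```python
# Back-of-envelope online rejection implied by the quoted rates.
bunch_crossing_rate_hz = 40e6   # LHC design bunch-crossing rate
recording_rate_hz = 200.0       # approximate ATLAS recording rate in 2010
overall_rejection = bunch_crossing_rate_hz / recording_rate_hz
print(f"overall rejection factor: {overall_rejection:.0f}")  # prints "overall rejection factor: 200000"
```

The trigger must therefore reject about 200,000 bunch crossings for every event it keeps, which is why the selection proceeds through successively tighter stages.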

  19. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift-tube based detectors. These detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a precision of a few microns. This permits attaining optimal measurements of the parameters of the charged-particle trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large-scale computing simulation of the ATLAS detector has been performed, mimicking ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment results for several challenges (real cosmic-ray data taking and computing system commissioning) will be...

  20. Atlas V Aft Bulkhead Carrier Rideshare System

    OpenAIRE

    Willcox, Maj Travis

    2012-01-01

    This paper gives the background and details of the Atlas V Aft Bulkhead Carrier to be flown on National Reconnaissance Office Launch 36 with the Operationally Unique Technologies Satellite Auxiliary Payload. The CubeSats included are from a number of labs, universities and government entities, for the purposes of technology demonstration, science experimentation and operational proof of concept. This mission will pave the way for rideshare on NRO missions and other Atlas V launches.

  1. The Monitoring and Calibration Web Systems for the ATLAS Tile Calorimeter Data Quality Analysis

    CERN Document Server

    Sivolella, A; Ferreira, F

    2012-01-01

    The Tile Calorimeter (TileCal) is one of the ATLAS sub-detectors. The read-out is performed by about 10,000 PhotoMultiplier Tubes (PMTs). The signal of each PMT is digitized by an electronic channel. The Monitoring and Calibration Web System (MCWS) supports the data quality analysis of the electronic channels. This application was developed to assess the detector status and verify its performance. It provides the user with the list of known problematic TileCal channels, which is stored in the ATLAS conditions database (COOL DB). The bad-channels list guides the data quality validator in identifying new problematic channels and is used in data reconstruction, and the system allows the channels list to be updated directly in the COOL database. MCWS can generate summary results, such as eta-phi plots and comparative tables of the masked-channels percentage. Regularly, during LHC (Large Hadron Collider) shutdowns, maintenance of the detector equipment is performed. When a channel is repaired, its calibration const...
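A summary metric like the masked-channels percentage mentioned above can be sketched in a few lines. The function name is illustrative, and the default total uses the approximate ~10,000 PMT count quoted in the abstract rather than the exact channel count:

```python
def masked_fraction(bad_channels, total_channels=10000):
    """Percentage of masked channels.

    total_channels defaults to the approximate TileCal PMT count
    (~10,000) quoted above; treat it as an assumption, not the
    exact detector channel count.
    """
    return 100.0 * len(bad_channels) / total_channels

# Two hypothetical bad channels out of ~10,000:
print(masked_fraction(["EBA03_ch12", "LBC45_ch07"]))  # 0.02
```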

  2. A proposal to upgrade the ATLAS RPC system for the High Luminosity LHC

    CERN Document Server

    ATLAS Collaboration; The ATLAS collaboration

    2015-01-01

    The architecture of the present trigger system in the ATLAS Muon Barrel was designed for a reference luminosity of 10^34 cm-2 s-1 with a safety factor of 5 with respect to the simulated background rates, now confirmed by LHC Run-1 data. The HL-LHC will provide a luminosity 5 times higher and an order of magnitude higher background. As a result, the performance demands increase, while the detector becomes susceptible to ageing effects. Moreover, the present muon trigger acceptance in the barrel is just above 70%, due to the presence of the barrel toroid structures. This scenario led the ATLAS Muon Collaboration to propose an appropriate upgrade plan, involving both detector and trigger-readout electronics, to guarantee the performance required by the physics program over the scheduled 20 years of operation. The plan consists of installing a layer of new-generation RPCs in the inner barrel, to increase redundancy and selectivity and to provide almost full acceptance. The first 10% of the system, corresponding to the e...

  3. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system also supports further data processing steps on the Grid, performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in the main ATLAS areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, development of the next-generation production system architecture is in progress. We report on scaling up the production system for a growing number of users provi...

  4. The Silicon Drift Detector readout scheme for the Inner Tracker System of the ALICE experiment

    International Nuclear Information System (INIS)

    The Silicon Drift Detectors (SDDs) provide, through the measurement of the drift time of the charge deposited by the particle which crosses the detector, information on the impact point and on the energy deposition. The foreseen readout scheme is based on a single chip implementation of an integrated circuit that includes low-noise amplification, fast analog storage and analog to digital conversion, thus avoiding the problems related to the analog signal transmission. A multi-event buffer that reduces the transmission bandwidth and a data compression/zero suppression unit complete the architecture. In this paper the system components design is described, together with the results of the first prototypes
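The zero-suppression step mentioned above can be illustrated with a minimal sketch: only samples above a pedestal threshold are kept, together with their positions, so the downstream link carries far fewer words. The threshold and sample values here are invented for illustration:

```python
def zero_suppress(samples, threshold):
    """Keep only (index, adc_value) pairs whose value exceeds threshold.

    A minimal illustration of zero suppression: positions are kept so
    the original waveform shape around each hit can be reconstructed.
    """
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# A toy drift-time waveform with one signal cluster and a noise hit:
samples = [0, 1, 0, 12, 47, 30, 2, 0, 0, 5]
print(zero_suppress(samples, 4))  # [(3, 12), (4, 47), (5, 30), (9, 5)]
```

Real implementations typically also keep a few neighbouring samples around each cluster for baseline estimation; this sketch omits that detail.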

  5. Vol. 30 - A Novel Data Acquisition System Based on Fast Optical Links and Universal Readout Boards

    CERN Document Server

    Korcyl, Grzegorz

    2015-01-01

    Measurement systems of various scales are composed of sensors providing data through the data acquisition system to the archiving facility. The scale of such systems is determined by the number of sensors that require processing, which can vary from a few up to hundreds of thousands. The number and the type of sensors impose several requirements on the data acquisition system, such as readout frequency, measurement precision and online analysis algorithms. The most challenging applications are the large-scale experiments in nuclear and particle physics. This thesis presents a concept, construction and tests of a modular and scalable, tree-structured architecture of a data acquisition system. The system is composed of two logical elements: endpoints, which are the modules providing data, and hubs, which concentrate the data streams from the endpoints and provide connectivity with the rest of the system. Those two logica...

  6. FPGA Based Data Read-Out System of the Belle 2 Pixel Detector

    CERN Document Server

    Levit, Dmytro; Greenwald, Daniel; Paul, Stephan

    2014-01-01

    The upgrades of the Belle experiment and the KEKB accelerator aim to increase the data set of the experiment by a factor of 50. This will be achieved by increasing the luminosity of the accelerator, which requires a significant upgrade of the detector. A new pixel detector based on DEPFET technology will be installed to handle the increased reaction rate and provide better vertex resolution. One of the features of the DEPFET detector is a long integration time of 20 μs, which increases detector occupancy up to 3%. The detector will generate about 2 GB/s of data. An FPGA-based two-level read-out system, the Data Handling Hybrid, was developed for the Belle 2 pixel detector. The system consists of 40 read-out and 8 controller modules. All modules are built in μTCA form factor using Xilinx Virtex-6 FPGAs and can utilize up to 4 GB of DDR3 RAM. The system was successfully tested in the beam test at DESY in January 2014. The functionality and the architecture of the Belle 2 Data Handling Hybrid system as well a...

  7. Development of A BPM data readout system using MADOCA II-LabVIEW interface

    International Nuclear Information System (INIS)

    We have developed a new control framework called 'MADOCA II', as reported at this PASJ10 meeting. The main features of MADOCA II are: (1) it can treat variable-length data such as a waveform or an image; (2) it can run on a Windows™ operating system. Using these features, we have developed a MADOCA II-LabVIEW interface program and applied it to a readout system for beam position monitors (BPM). The system consists of two NI PXI 5922 digitizers (4 channels in total), a CPU and a PXI crate, and the readout program was written in LabVIEW on a 32-bit version of Windows 7. BPM data is digitized on the PXI 5922 at 50k samples/s and decimated to several sampling rates (50∼5k samples/s) by the LabVIEW-based software; the decimated data are transferred to remote client software via the MADOCA II middleware. Monitoring of the beam orbit is performed on the client, with graphs in the time domain and, via FFT, in the frequency domain. It was confirmed that all decimated data were transferred to the client at sufficient speed without any loss. (author)
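The decimation step described above (50 kS/s reduced to several lower rates) can be sketched with simple block averaging. The averaging method is an assumption of this sketch; the abstract does not specify which decimation filter the LabVIEW software uses:

```python
def decimate_by_averaging(samples, factor):
    """Reduce the sample rate by `factor` by averaging non-overlapping
    blocks of `factor` consecutive samples (a simple decimation filter;
    the actual filter used in the system is not specified above)."""
    n = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(n)]

# 50 kS/s -> 5 kS/s is a factor-10 reduction; toy ramp data:
data = list(range(20))
print(decimate_by_averaging(data, 10))  # [4.5, 14.5]
```

Averaging before downsampling also suppresses noise above the new Nyquist frequency, which matters when the client runs an FFT on the decimated stream.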

  8. Recent Developments on the Silicon Drift Detector readout scheme for the ALICE Inner Tracking System

    CERN Document Server

    Mazza, G; Bonazzola, G C; Bonvicini, V; Cavagnino, D; Cerello, P G; De Remigis, P; Falchieri, D; Gabrielli, A; Gandolfi, E; Giubellino, P; Hernández, R; Masetti, M; Montaño-Zetina, L M; Nouais, D; Rashevsky, A; Rivetti, A; Tosello, F

    1999-01-01

    Proposal of abstract for LEB99, Snowmass, Colorado, 20-24 September 1999Recent developments of the Silicon Drift Detector (SDD) readout system for the ALICE Experiment are presented. The foreseen readout system is based on 2 main units. The first unit consists of a low noise preamplifier, an analog memory which continuously samples the amplifier output, an A/D converter and a digital memory. When the trigger signal validates the analog data, the ADCs convert the samples into a digital form and store them into the digital memory. The second unit performs the zero suppression/data compression operations. In this paper the status of the design is presented, together with the test results of the A/D converter, the multi-event buffer and the compression unit prototype.Summary:In the Inner Tracker System (ITS) of the ALICE experiment the third and the fourth layer of the detectors are SDDs. These detectors provide the measurement of both the energy deposition and the bi-dimensional position of the track. In terms o...

  9. "ATLAS" Advanced Technology Life-cycle Analysis System

    Science.gov (United States)

    Lollar, Louis F.; Mankins, John C.; ONeil, Daniel A.

    2004-01-01

    Making good decisions concerning research and development portfolios, and concerning the best systems concepts to pursue, as early as possible in the life cycle of advanced technologies is a key goal of R&D management. This goal depends upon the effective integration of information from a wide variety of sources, as well as focused, high-level analyses intended to inform such decisions. The presentation provides a summary of the Advanced Technology Life-cycle Analysis System (ATLAS) methodology and tool kit. ATLAS encompasses a wide range of methods and tools. A key foundation for ATLAS is the NASA-created Technology Readiness Level (TRL) system. The toolkit is largely spreadsheet based (as of August 2003). This product is being funded by the Human and Robotics Technology Program Office, Office of Exploration Systems, NASA Headquarters, Washington D.C., and is being integrated by Dan O Neil of the Advanced Projects Office, NASA/MSFC, Huntsville, AL

  10. Performance of the ATLAS DAQ DataFlow system

    CERN Document Server

    Ünel, G; Beck, H P; Beretta, M; Blair, R; Bogaerts, J A C; Botterill, David R; Ciobotaru, M; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; De Matos-Lopes-Pinto,; Di Girolamo, B; Dobinson, Robert W; Dos Anjos, A; Ermoline, Y; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Golonka, P; Gorini, B; Green, B; Gruwé, M; Haas, S; Haeberli, C; Hasegawa, Y; Hauser, R; Hinkelbein, C; Hughes-Jones, R E; Joos, M; Kaczmarska, A; Kieft, G; Korcyl, K; Kugel, A; Lankford, A; Le Vine, M J; Lehmann, G; Losada-Maia, M; Maeno, T; Mapelli, L; Martin, B; McLaren, R; Meirosu, C; Misiejuk, A; Mommsen, R K; Mornacchi, G; Müller, M; Nagasaka, Y; Nakayoshi, K; Palencia-Cortezon, E; Pasqualucci, E; Petersen, J; Prigent, D; Pérez-Réale, V; Schlereth, J L; Shimojima, M; Spiwoks, R; Stancu, S; Strong, J; Tremblet, L; Vandelli, Wainer R; Vermeulen, J C; Werner, P; Wickens, Fred J; Yasu, Y; Yu, M; Zobernig, H; Zurek, M; Computing In High Energy Physics

    2005-01-01

    The baseline DAQ architecture of the ATLAS Experiment at the LHC is introduced, and its present implementation and the performance of the DAQ components as measured in a laboratory environment are summarized. It is shown that the discrete event simulation model of the DAQ system, tuned using these measurements, predicts the behaviour of the prototype configurations well, after which predictions for the final ATLAS system are presented. With the currently available hardware and software, a system using ~140 ROSs with a single 3 GHz CPU, ~100 SFIs with dual 2.4 GHz CPUs and ~500 L2PUs with dual 3.06 GHz CPUs

  11. A double photomultiplier Compton camera and its readout system for mice imaging

    International Nuclear Information System (INIS)

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. Eventually the polar angle, and hence a 'cone' of possible incident directions, is obtained (an event with 'incomplete geometry'). Different solutions for the two detectors are proposed in the literature: our design foresees two similar Position Sensitive Photomultipliers (PMTs, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge-multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynodes of the PMTs. Assets are the low cost and the simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that the spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
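The polar angle of the cone mentioned above follows from standard Compton kinematics, cos θ = 1 − m_e c² (1/E′ − 1/E₀), where E₀ is the incident energy and E′ the scattered-photon energy. A minimal sketch; the 140.5 keV example energy and the 30 keV tracker deposit are invented for illustration, not taken from the abstract:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_incident_kev, e_scattered_kev):
    """Polar scattering angle (radians) from Compton kinematics:
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0)."""
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scattered_kev
                                    - 1.0 / e_incident_kev)
    return math.acos(cos_theta)

# Hypothetical event: a 140.5 keV photon (99mTc line) deposits
# 30 keV in the tracker, so the calorimeter sees 110.5 keV:
angle = compton_cone_angle(140.5, 140.5 - 30.0)
print(f"cone half-opening angle: {math.degrees(angle):.1f} deg")
```

The tracker and calorimeter energy measurements thus fix the cone half-opening angle; intersecting many such cones localizes the source.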

  12. Noise Characteristics of Readout Electronics for 64-Channel DROS Magnetocardiography System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. M.; Kim, K. D.; Lee, Y. H.; Yu, K. K.; Kim, K. W.; Kwon, H. C. [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of); Sasada, Ichiro [Dept. of Applied Science for Electrics and Materials, Kyushu University, Fukuoka (Japan)

    2005-10-15

    We have developed control electronics to operate a flux-locked loop (FLL), and analog signal filters to process FLL outputs, for a 64-channel Double Relaxation Oscillation SQUID (DROS) magnetocardiography (MCG) system. The control electronics, consisting of a preamplifier, an integrator, and a feedback circuit, is compact and low-cost due to the larger swing voltage and flux-to-voltage transfer coefficients of DROSs compared with those of dc SQUIDs. The analog signal filter (ASF), serially chaining a high-pass filter with a cut-off frequency of 0.1 Hz, an amplifier with a gain of 100, a low-pass filter of 100 Hz, and a notch filter of 60 Hz, makes the FLL output suitable for MCG. The noise of the preamplifier in the FLL control electronics is 7 nV/√Hz at 1 Hz and 1.5 nV/√Hz at 100 Hz, which contributes 6 fT/√Hz at 1 Hz and 1.3 fT/√Hz at 100 Hz in the readout electronics, and the noise of the ASF electronics is 150 μV/√Hz, equivalent to 0.13 fT/√Hz within the range of 1 - 100 Hz. When the DROSs are connected to the readout electronics inside a magnetically shielded room, the noise of the 64-channel DROS system is 10 fT/√Hz at 1 Hz and 5 fT/√Hz at 100 Hz on average, low enough to measure human MCG.

  13. A readout system for the micro-vertex-detector demonstrator for the CBM experiment at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Christoph

    2011-06-09

    The Compressed Baryonic Matter Experiment (CBM) is a fixed-target heavy-ion experiment currently in preparation at the future FAIR accelerator complex in Darmstadt. The CBM experiment focuses on the measurement of diagnostic probes of the early and dense phase of the fireball at beam energies from 8 up to 45 AGeV. As observables, rare hadronic, leptonic and photonic probes are used, including open charm. Open charm will be identified by reconstructing the secondary decay vertex of the corresponding short-lived particles. As the central component for track reconstruction, a detector system based on silicon semiconductor detectors is planned. The first three stations of the Silicon Tracking System (STS) make up the so-called Micro-Vertex-Detector (MVD) operating in moderate vacuum. Because of the well-balanced compromise between an excellent spatial resolution (few μm), low material budget (∼50 μm Si), adequate radiation tolerance and readout speed, Monolithic Active Pixel Sensors (MAPS) based on CMOS technology are better suited than any other technology for the reconstruction of the secondary vertex in CBM. A new detector concept has to be developed. Two MVD-Demonstrator modules have been successfully tested with 120 GeV pions at the CERN-SPS. The main topic of this thesis is the development of a control and readout concept for several MVD-Demonstrator modules with a common data acquisition system. In order to achieve the required results, a front-end electronics device has been developed which is capable of reading the analogue signals of two sensors on a flex-print cable. The high data rate of the MAPS sensors (1.2 Gbit per second per sensor, at 50 MHz sampling and 12-bit ADC resolution) requires a readout system which processes the data on-line in a pipeline to avoid dead times. In order to implement the pipeline processing an FPGA is used, which is located on an additional hardware platform. In order to integrate the MVD-Demonstrator readout board in the
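The quoted MAPS data rate can be cross-checked against the sampling figures. One 12-bit ADC stream at 50 MHz produces 600 Mbit/s, so the 1.2 Gbit/s per sensor would correspond to two such parallel streams; the two-stream factor is an assumption of this sketch, not stated in the abstract:

```python
# Data-rate sanity check for the MAPS figures quoted above.
adc_clock_hz = 50e6                      # 50 MHz sampling
adc_bits = 12                            # 12-bit ADC resolution
per_stream_bps = adc_clock_hz * adc_bits # 600 Mbit/s per ADC stream
# Assumption: two parallel ADC streams per sensor reproduce the
# quoted 1.2 Gbit/s per-sensor figure.
per_sensor_bps = 2 * per_stream_bps
print(per_sensor_bps / 1e9, "Gbit/s per sensor")  # 1.2 Gbit/s per sensor
```

A sustained gigabit-scale stream per sensor is what forces the on-line FPGA pipeline mentioned above, since buffering and post-processing at that rate would otherwise introduce dead time.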

  14. A readout system for the micro-vertex-detector demonstrator for the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    The Compressed Baryonic Matter Experiment (CBM) is a fixed-target heavy-ion experiment currently in preparation at the future FAIR accelerator complex in Darmstadt. The CBM experiment focuses on the measurement of diagnostic probes of the early and dense phase of the fireball at beam energies from 8 up to 45 AGeV. As observables, rare hadronic, leptonic and photonic probes are used, including open charm. Open charm will be identified by reconstructing the secondary decay vertex of the corresponding short-lived particles. As the central component for track reconstruction, a detector system based on silicon semiconductor detectors is planned. The first three stations of the Silicon Tracking System (STS) make up the so-called Micro-Vertex-Detector (MVD) operating in moderate vacuum. Because of the well-balanced compromise between an excellent spatial resolution (few μm), low material budget (∼50 μm Si), adequate radiation tolerance and readout speed, Monolithic Active Pixel Sensors (MAPS) based on CMOS technology are better suited than any other technology for the reconstruction of the secondary vertex in CBM. A new detector concept has to be developed. Two MVD-Demonstrator modules have been successfully tested with 120 GeV pions at the CERN-SPS. The main topic of this thesis is the development of a control and readout concept for several MVD-Demonstrator modules with a common data acquisition system. In order to achieve the required results, a front-end electronics device has been developed which is capable of reading the analogue signals of two sensors on a flex-print cable. The high data rate of the MAPS sensors (1.2 Gbit per second per sensor, at 50 MHz sampling and 12-bit ADC resolution) requires a readout system which processes the data on-line in a pipeline to avoid dead times. In order to implement the pipeline processing an FPGA is used, which is located on an additional hardware platform. In order to integrate the MVD-Demonstrator readout board in the HADES data

  15. Software releases management for TDAQ system in ATLAS experiment

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Hauser, R; Soloviev, I

    2010-01-01

    ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. The TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing the framework and APIs for building the software pieces of the TDAQ system. It is currently composed of more than 200 software packages which are available to ATLAS users in the form of regular software releases. The software is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by software developers and software librarians in order to develop, release, deploy and maintain the TDAQ software over the long period of development, commissioning and runnin...

  16. Cartographic Concept of Atlas Information System "ÖROK Atlas Online" - AIS Austria

    Directory of Open Access Journals (Sweden)

    Mirjanka Lechthaler

    2005-09-01

    Full Text Available The conception and realization of the Atlas Information System "ÖROK Atlas Online" (AIS Austria), with emphasis on its cartographic concept, is presented in the following article. It is a spatial information system that differs from pure GIS because of its strong cartographic character. The online system is not a collection of GIS-based tools, but a system that subdivides all functionalities into a cartographically conceived and structured order. It must correspond with the characteristics of a cartographic, rule-based and personalized information system. The prototype should allow the cartographic visualization of different geometry and statistical data from the elementary geo-data pool in the form of thematic maps, graphics, statistics and texts, as well as queries of the database. The primary challenge and aim of the cartographic concept is the development of the map graphics, or rather of the cartographic design, which has to match the output media. Further, analysis, exploration and monitoring of these data sets via a map-based graphical user interface for different user groups shall be possible. The restrictive-flexible user guidance in this interactive system takes responsibility for what is not accessible or useful in a cartographic or semantic sense. Only in this case can an atlas information system support meaningful cartographic communication.

  17. The cryogenic readout system with GaAs JFETs for multi-pixel cameras

    Science.gov (United States)

    Hibi, Y.; Matsuo, H.; Nagata, H.; Ikeda, H.; Fujiwara, M.

    2010-11-01

    Our purpose is to realize a multi-pixel sub-millimeter/terahertz camera with superconductor-insulator-superconductor photon detectors. These detectors must be cooled below 1 K. Since these detectors have high impedance, the signal amplifier of each pixel must be placed beside it for precise signal readout. Therefore, it is desirable that the readout system works well even at cryogenic temperature. We selected n-type GaAs JFETs as cryogenic circuit elements. From our previous studies, n-type GaAs JFETs have good cryogenic properties even when their power dissipation is low. We have designed several kinds of integrated circuits (ICs) and demonstrated their performance at cryogenic temperature. The ICs are the following: AC-coupled trans-impedance amplifiers (CTIAs), voltage distributors for suppressing the input offset voltage of the AC-coupled CTIAs, multiplexers with sample-and-holds, and shift registers for controlling the multiplexing timing. The power dissipation of each circuit is 0.5 to 3 microwatts per channel. We have also designed and manufactured 32-channel multi-chip modules with these ICs. These modules turn 32-channel input photocurrent signals into one or two serial output voltage signals. Their size is 40 mm × 30 mm × 2 mm and the estimated total power dissipation is around 400 microwatts.
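The ~400 μW module figure quoted above can be roughly reconciled with the per-circuit numbers. Assuming each of the 32 channels passes through the four circuit types listed, each at the upper end of the quoted 0.5-3 μW range (both the circuit count per channel and the choice of the upper end are assumptions of this sketch):

```python
# Power-budget sketch for the 32-channel multi-chip module.
channels = 32
circuits_per_channel = 4    # CTIA, distributor, mux/S&H, shift register (assumed)
power_per_circuit_uw = 3.0  # upper end of the quoted 0.5-3 uW/channel range
total_uw = channels * circuits_per_channel * power_per_circuit_uw
print(total_uw, "uW")  # 384.0 uW, consistent with the ~400 uW estimate
```

Keeping the whole module in the hundreds-of-microwatts range is what allows it to sit on the sub-kelvin stage next to the detectors.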

  18. On the Integration of a Readout System Dedicated for Neutron Discrimination in Harsh Environment

    Directory of Open Access Journals (Sweden)

    Krit S. Ben

    2016-01-01

    Full Text Available New insights related to the integration of a readout system dedicated to the detection and discrimination of neutrons are presented here. This study takes place in the framework of the I_SMART European project. The system will later have to work in an environment that is harsh in terms of temperature and radiation, which makes necessary not only the development of specifications for the operation and reliability of the components, but also the investigation of margins for the interplay of the system. The implementation of the analog conditioning chain at transistor level in AMS (Analog/Mixed Signal) 0.35 μm CMOS technology is investigated here; the electrical performance has been validated in SPICE-level simulations using the "Spectre" simulator (SPICE-based) under Cadence DFII.

  19. The prototype readout electronics system for the External Target Experiment in CSR of HIRFL

    International Nuclear Information System (INIS)

    A prototype readout electronics system was designed for the External Target Experiment in the Cooling Storage Ring (CSR) of the Heavy Ion Research Facility in Lanzhou (HIRFL). The kernel parts include the 128-channel 100 ps high-resolution time digitization module, the 16-channel 25 ps high-resolution time and charge measurement module, and the trigger electronics, as well as the clock generation circuits, which are all integrated within a PXI-6U crate. The laboratory test results indicate that a good resolution is achieved, better than required. We have also conducted initial commissioning tests with the detectors to confirm the functions of the system. This prototype work prepares for the future extended system.
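A useful reference point for the 100 ps and 25 ps modules quoted above is the quantization-limited RMS resolution of an ideal TDC, which is the bin width divided by √12 (the standard uniform-quantization result); the measured resolution of a real module will also include jitter and nonlinearity terms not modeled here:

```python
import math

def tdc_rms_ps(bin_width_ps):
    """Quantization-limited RMS timing resolution of an ideal TDC:
    bin_width / sqrt(12), the standard uniform-quantization result."""
    return bin_width_ps / math.sqrt(12)

print(round(tdc_rms_ps(100), 1))  # 28.9 (ps, for the 100 ps module)
print(round(tdc_rms_ps(25), 1))   # 7.2  (ps, for the 25 ps module)
```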

  20. The prototype readout electronics system for the External Target Experiment in CSR of HIRFL

    Science.gov (United States)

    Zhao, L.; Kang, L.; Li, M.; Liu, S.; Zhou, J.; An, Q.

    2014-07-01

    A prototype readout electronics system was designed for the External Target Experiment in the Cooling Storage Ring (CSR) of the Heavy Ion Research Facility in Lanzhou (HIRFL). The kernel parts include the 128-channel 100 ps high-resolution time digitization module, the 16-channel 25 ps high-resolution time and charge measurement module, and the trigger electronics, as well as the clock generation circuits, which are all integrated within a PXI-6U crate. The laboratory test results indicate that a good resolution is achieved, better than required. We have also conducted initial commissioning tests with the detectors to confirm the functions of the system. This prototype work prepares for the future extended system.

  1. Custom solution for a data readout architecture: A system level simulation

    International Nuclear Information System (INIS)

    Behavioral simulations of a data readout architecture based on VME and custom high-speed buses show that it is suitable as a data acquisition and event building system for high-energy physics experiments. This paper describes a reliable but simple auxiliary bus designed to support asynchronous transactions at up to 10 Mtransfers/s, sparse data scan operations and crate-level event building. An intercrate connection is also presented to accomplish system-level event building and data concentration by means of synchronous transactions at rates up to 10 Mtransfers/s. This architecture has been simulated using Verilog HDL. Preliminary performance estimates are presented and briefly discussed in view of the system's application in the KLOE experiment at the DAΦNE Φ-factory in Frascati (Italy)
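The 10 Mtransfers/s figure quoted above translates into a byte bandwidth once a word width is fixed. Assuming 32-bit (4-byte) transfers, which is typical for VME-class buses but not stated in the abstract:

```python
# Bandwidth implied by the quoted transfer rate, assuming 32-bit words.
transfers_per_s = 10e6       # 10 Mtransfers/s from the abstract
bytes_per_transfer = 4       # 32-bit word width (assumption of this sketch)
bandwidth_bps = transfers_per_s * bytes_per_transfer
print(bandwidth_bps / 1e6, "MB/s")  # 40.0 MB/s
```

This is the raw per-bus ceiling; sparse data scan and event-building overheads would reduce the usable payload rate.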

  2. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; 
Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno 
lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; 
Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; 
Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; 
Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, 
L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J 
B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; 
Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; 
Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; 
Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; 
Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L 
G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; 
Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, 
T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; 
Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; and high-precision measurements of the third quark family, such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  3. The ATLAS Data Acquisition and High Level Trigger system

    Science.gov (United States)

    The ATLAS TDAQ Collaboration

    2016-06-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.
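
    The staged selection behind such a system can be pictured with a hedged sketch (this is not ATLAS code; the event fields, thresholds, and function names below are invented for illustration): a cheap first-pass filter thins the event rate before a slower, more detailed high level trigger selection runs on the survivors.

```python
# Hedged illustration of staged trigger selection; all names and
# thresholds are hypothetical, not the ATLAS implementation.

def level1_accept(event):
    """Cheap first-pass cut, standing in for the hardware-level trigger."""
    return event["energy"] > 20.0

def hlt_accept(event):
    """Slower software selection using more complete event information."""
    return event["energy"] > 25.0 and event["n_tracks"] >= 2

events = [
    {"energy": 30.0, "n_tracks": 3},   # passes both stages
    {"energy": 22.0, "n_tracks": 1},   # passes level 1, rejected by the HLT
    {"energy": 10.0, "n_tracks": 5},   # rejected immediately
]

# Only events surviving the fast filter reach the expensive stage.
accepted = [e for e in events if level1_accept(e) and hlt_accept(e)]
```

    The point of the two-stage structure is that the expensive selection only ever sees the small fraction of events the fast filter lets through.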

  4. Description on the signal processing system of ATLAS facility

    International Nuclear Information System (INIS)

    In the present report, the signal processing system and logic of the ATLAS facility are explained. Input signals were categorized as first-, second- and third-order EU (engineering unit) parameters according to the signal processing logic. The system integration is described in Chapter 2, and the signal processing logic for the different signal types is presented in Chapter 3.
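
    The ordering of EU parameters can be sketched as follows (a hedged illustration; the function names, gains, and sensor values are hypothetical, since the abstract does not give the actual conversion rules): a first-order parameter is converted directly from a raw reading, while a second-order parameter is derived from already-converted first-order values.

```python
# Hedged sketch of hierarchical engineering-unit (EU) conversion; the
# calibration constants and readings below are invented for illustration.

def first_order_eu(raw_counts, gain, offset):
    """First-order EU parameter: linear calibration of a raw ADC reading."""
    return gain * raw_counts + offset

def second_order_eu(values):
    """Second-order EU parameter: derived from first-order EU values,
    e.g. an average over several calibrated sensors."""
    return sum(values) / len(values)

# Three raw channel readings are calibrated, then combined.
raw = [1021, 1034, 1040]
temps = [first_order_eu(r, gain=0.05, offset=10.0) for r in raw]
avg_temp = second_order_eu(temps)
```

    A third-order parameter would, in the same spirit, be computed from second-order ones.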

  5. Report on container technology for the ATLAS TDAQ system

    CERN Document Server

    Gadirov, Hamid

    2016-01-01

    My summer student project, "Container technology for the Upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system", focused on researching container-based (operating-system-level) virtualization for TDAQ software. Several tests were performed on the Docker platform, all of which demonstrated compatibility with the TDAQ software.

  6. Tuning of Kilopixel Transition Edge Sensor Bolometer Arrays with a Digital Frequency Multiplexed Readout System

    CERN Document Server

    MacDermid, K; Aubin, F; Bissonnette, E; Dobbs, M; Hubmayr, J; Smecher, G; Warraich, S

    2009-01-01

    A digital frequency multiplexing (DfMUX) system has been developed and used to tune large arrays of transition edge sensor (TES) bolometers read out with SQUID arrays for mm-wavelength cosmology telescopes. The DfMUX system multiplexes the input bias voltages and output currents for several bolometers on a single set of cryogenic wires. Multiplexing reduces the heat load on the camera's sub-Kelvin cryogenic detector stage. In this paper we describe the algorithms and software used to set up and optimize the operation of the bolometric camera. The algorithms are implemented on soft processors embedded within FPGA devices operating on each backend readout board. The result is a fully parallelized implementation for which the setup time is independent of the array size.
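    The parallelized, per-board setup described above can be caricatured in a few lines of Python. Everything here (board layout, channel spacing, the tune_board routine) is a hypothetical sketch, meant only to show why a fully parallel implementation makes setup time independent of array size:

```python
from concurrent.futures import ThreadPoolExecutor

def tune_board(board_id, channels):
    """Hypothetical per-board tuning: assign each bolometer channel a
    bias carrier frequency and mark it tuned. A real DfMUX tuner would
    also null SQUID offsets and bias each detector into its transition."""
    tuned = {}
    for i, ch in enumerate(channels):
        tuned[ch] = {"bias_freq_hz": 1.0e6 + 75e3 * i,  # illustrative spacing
                     "tuned": True}
    return board_id, tuned

# Four boards of eight channels each; boards are tuned concurrently, so
# the total setup time tracks the slowest board, not the number of boards.
boards = {b: [f"b{b}ch{c}" for c in range(8)] for b in range(4)}
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda item: tune_board(*item), boards.items()))
```

    Each board runs its own tuning loop on its embedded processor; the host only collects the results.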

  7. Planar Lithographed Superconducting LC Resonators for Frequency-Domain Multiplexed Readout Systems

    Science.gov (United States)

    Rotermund, K.; Barch, B.; Chapman, S.; Hattori, K.; Lee, A.; Palaio, N.; Shirley, I.; Suzuki, A.; Tran, C.

    2016-03-01

    Cosmic microwave background (CMB) polarization experiments are increasing the number of transition edge sensor (TES) bolometers to increase sensitivity. In order to maintain low thermal loading of the sub-Kelvin stage, the frequency-domain multiplexing (FDM) factor has to increase accordingly. FDM is achieved by placing TES bolometers in series with inductor-capacitor (LC) resonators, which select the readout frequency. The multiplexing factor can be raised with a large total readout bandwidth and small frequency spacing between channels. The inductance is kept constant to maintain a uniform readout bandwidth across detectors, while the maximum acceptable value is determined by bolometer stability. Current technology relies on commercially available ceramic chip capacitors. These have high scatter in their capacitance thereby requiring large frequency spacing. Furthermore, they have high equivalent series resistance (ESR) at higher frequencies and are time consuming and tedious to hand assemble via soldering. A solution lies in lithographed, planar spiral inductors (currently in use by some experiments) combined with interdigitated capacitors on a silicon (Si) substrate. To maintain reasonable device dimensions, we have reduced trace and gap widths of the LCs to 4 μm. We increased the inductance from 16 to 60 μH to achieve a higher packing density, a requirement for FDM systems with large multiplexing factors. Additionally, the Si substrate yields low ESR values across the entire frequency range and lithography makes mass production of LC pairs possible. We reduced mutual inductance between inductors by placing them in a checkerboard pattern with the capacitors, thereby increasing physical distances between adjacent inductors. We also reduce magnetic coupling of inductors with external sources by evaporating a superconducting ground plane onto the backside of the substrate. We report on the development of lithographed LCs in the 1-5 MHz range for use
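    The channel placement in such an FDM system follows directly from the ideal LC resonance formula, f = 1/(2π√(LC)). A small sketch of the arithmetic, using the fixed 60 μH inductance quoted above (the rest is illustrative):

```python
import math

def resonant_freq_hz(L_h, C_f):
    """Ideal LC resonance: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

def capacitance_for(f_hz, L_h):
    """Capacitance that places a channel at frequency f for a fixed L."""
    return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * L_h)

L = 60e-6                          # fixed 60 uH inductance, as in the text
c_lo = capacitance_for(1e6, L)     # bottom of the 1-5 MHz readout band
c_hi = capacitance_for(5e6, L)     # top of the band
print(f"C(1 MHz) ~ {c_lo * 1e12:.0f} pF, C(5 MHz) ~ {c_hi * 1e12:.1f} pF")
```

    With L held constant, the capacitor alone sets each channel's readout frequency, which is why capacitance scatter in ceramic chips translates directly into frequency scatter and forces wider channel spacing.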

  9. Fast imaging readout and electronics--a novel high-speed imaging system for micro-channel plates

    CERN Document Server

    Lapington, J S

    2002-01-01

    The bandwidth of charge-division readout anodes used with micro-channel plates (MCPs) is usually limited by the speed of the acquisition electronics. We present a novel charge-division anode that does not require analogue-to-digital conversion. The Fast Imaging Readout and Electronics is a new concept in high-speed imaging using an MCP detector. The imaging system described comprises an MCP intensifier coupled to a charge-division image readout using high-speed, multichannel electronics. It has a projected spatial resolution of up to 128×128 pixels, though the image format is inherently flexible, and the potential for rates up to 100 million events per second with nanosecond timing resolution. The readout pattern has a planar electrode structure, and the collected charge from each event is shared amongst all electrodes, grouped in pairs. The unique design of the readout obviates the need for charge measurement, usually the dominant process determining the event-processing deadtime. Instead, high-speed signal c...
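    For context, a conventional charge-division readout, the scheme whose digitization bottleneck this design removes, decodes the event position from the ratio of digitized charges. A one-dimensional sketch:

```python
def charge_division_position(q_left, q_right):
    """Classic 1-D charge division: an event's position along the anode
    is the fraction of the total charge collected at one end. This
    per-event analogue charge measurement is what dominates deadtime,
    and what the readout described above avoids."""
    total = q_left + q_right
    if total <= 0:
        raise ValueError("no collected charge")
    return q_right / total   # 0.0 at the left edge, 1.0 at the right

pos = charge_division_position(3.0, 1.0)   # event a quarter of the way along
```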

  10. New conversion factors between human and automatic readouts of the CDMAM phantom for CR systems

    Science.gov (United States)

    Hummel, Johann; Homolka, Peter; Osanna-Elliot, Angelika; Kaar, Marcus; Semtrus, Friedrich; Figl, Michael

    2016-03-01

    Mammography screening demands profound image quality (IQ) assessment to guarantee screening success. The European protocol for the quality control of the physical and technical aspects of mammography screening (EPQCM) suggests a contrast-detail phantom such as the CDMAM phantom to evaluate IQ. Software for automatic evaluation is provided by EUREF. As human and automatic readouts differ systematically, conversion factors were published by the official reference organisation (EUREF). Because we experienced a significant difference in these factors for Computed Radiography (CR) systems, we developed an objectifying analysis software which presents the cells, including the gold disks, randomly in thickness and rotation. This overcomes the problem of an inevitable learning effect whereby observers know the position of the disks in advance. Applying this software, 45 CR systems were evaluated and the conversion factors between human and automatic readout determined. The resulting conversion factors were compared with the ones resulting from the two methods published by EUREF. We found our conversion factors to be substantially lower than those suggested by EUREF: 1.21 compared to 1.42 (EUREF EU method) and 1.62 (EUREF UK method) for 0.1 mm, and 1.40 compared to 1.73 (EUREF EU) and 1.83 (EUREF UK) for 0.25 mm disc diameter, respectively. This can result in a dose increase of up to 90% if either of these factors is used to adjust patient dose in order to fulfil image quality requirements. This suggests the need for agreement on their proper application and limits the validity of the assessment methods. We therefore stress the need for clear criteria for CR systems based on appropriate studies.
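    One way to see how a mismatched conversion factor propagates into patient dose is a rough sketch under the Rose-model assumption that contrast-to-noise ratio scales as the square root of dose, so the required dose scales with the square of the contrast-threshold ratio (an assumption for illustration, not the paper's own dose model):

```python
def dose_ratio(factor_a, factor_b):
    """Dose needed to meet threshold factor_a relative to factor_b,
    under the assumed Rose-model scaling dose ~ threshold**2."""
    return (factor_a / factor_b) ** 2

# Conversion factors for the 0.1 mm disc quoted above
extra = (dose_ratio(1.62, 1.21) - 1.0) * 100.0
print(f"EUREF UK factor vs. this work: ~{extra:.0f}% extra dose")
```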

  11. System Architecture Modeling for Technology Portfolio Management using ATLAS

    Science.gov (United States)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be evaluated. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
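    The four-step workflow above can be sketched in Python. All names here (Element, Architecture, run_atlas, the mass numbers) are hypothetical illustrations, not the actual ATLAS tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """An architecture element configured with technology choices."""
    name: str
    tech_mass_kg: dict = field(default_factory=dict)  # technology -> mass impact

@dataclass
class Architecture:
    requirements: dict
    elements: list

def run_atlas(arch):
    """Step 4: run the analysis script -- here a toy mass roll-up that is
    checked against the system-level requirement from step 1."""
    total = sum(sum(e.tech_mass_kg.values()) for e in arch.elements)
    return {"total_mass_kg": total,
            "meets_req": total <= arch.requirements["max_mass_kg"]}

sea = Architecture(
    requirements={"max_mass_kg": 100_000},                      # step 1
    elements=[Element("launch_vehicle", {"engine_x": 60_000}),  # steps 2-3
              Element("crew_vehicle", {"tps_y": 15_000})],
)
report = run_atlas(sea)
```

    Comparing two technology sets then amounts to running the roll-up on two configurations and comparing the reports.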

  12. Finite element simulations of low-mass readout cables for the CBM Silicon Tracking System using RAPHAEL

    Science.gov (United States)

    Singla, M.; Chatterji, S.; Müller, W. F. J.; Kleipa, V.; Heuser, J. M.

    2014-01-01

    The first three-dimensional simulation study of thin multi-line readout cables using the finite element simulation tool RAPHAEL is reported. The application is the Silicon Tracking System (STS) of the fixed-target heavy-ion experiment Compressed Baryonic Matter (CBM), under design at the forthcoming accelerator center FAIR in Germany. RAPHAEL has been used to design low-mass analog readout cables with the minimum possible Equivalent Noise Charge (ENC). Various trace geometries and trace materials have been explored in detail in this optimization study. These cables will bridge the distance between the microstrip detectors and the signal processing electronics placed at the periphery of the silicon tracking stations. SPICE modeling has been implemented in Sentaurus Device to study the transmission loss (dB loss) in the cables, and the simulation has been validated with measurements. An optimized design with the minimum possible ENC, material budget and transmission loss for the readout cables has been proposed.

  13. Implementation of the readout system in the UFFO Slewing Mirror Telescope

    CERN Document Server

    Kim, J E; Jung, A; Ahn, K -B; Choi, H S; Choi, Y J; Grossan, B; Hermann, I; Jeong, S; Kim, S -W; Kim, Y W; Lee, J; Linder, E V; Min, K W; Na, G W; Nam, J W; Nam, K H; Panayuk, M I; Park, I H; Smoot, G F; Suh, Y D; Svelitov, S; Vedenken, N; Yashin, I; Zhao, M H

    2011-01-01

    The Ultra-Fast Flash Observatory (UFFO) is a new space-based experiment to observe Gamma-Ray Bursts (GRBs). GRBs are the most luminous electromagnetic events in the universe and occur randomly in any direction. Therefore the UFFO consists of two telescopes: the UFFO Burst Alert & Trigger Telescope (UBAT), which detects GRBs over a wide field of view (FOV), and the Slewing Mirror Telescope (SMT), which rapidly observes UV/optical counterparts within a narrow, targeted FOV. The SMT is a Ritchey-Chretien telescope that uses a motorized mirror system and an Intensified Charge-Coupled Device (ICCD). When a GRB is triggered by the UBAT, the SMT receives the position information and rapidly tilts the mirror to the target. The ICCD starts taking data within a second after the GRB trigger. Here we give details about the SMT readout electronics that deliver the data.

  14. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  15. MBAT: A scalable informatics system for unifying digital atlasing workflows

    Directory of Open Access Journals (Sweden)

    Sane Nikhil

    2010-12-01

    Full Text Available Abstract Background Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing these data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. Results The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data.
Conclusions MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context

  16. Recent ATLAS Detector Improvements

    CERN Document Server

    de Nooij, L; The ATLAS collaboration

    2011-01-01

    During the recent LHC shutdown period, ATLAS performed vital maintenance and improvements on the various sub-detectors. For the calorimeters, repairs were carried out on front-end electronics and power supplies to recover detector coverage that had been lost since the last maintenance period. The ALFA luminosity detector was installed along the beam line and is currently being commissioned. Smaller scale repairs were needed on the Inner Detector. Maintenance on the muon system included repairs on the readout as well as updates and leak checks in the gas systems. Six TGC chambers were also replaced. This poster summarizes the repairs and their expected improvement for physics performance and reliability of ATLAS for the upcoming LHC run.

  17. Control and Data Acquisition System of the ATLAS Facility

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ki-Yong; Kwon, Tae-Soon; Cho, Seok; Park, Hyun-Sik; Baek, Won-Pil; Kim, Jung-Taek

    2007-02-15

    This report describes the control and data acquisition system of an integral effect test facility, the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation) facility, which was recently constructed at KAERI (Korea Atomic Energy Research Institute). The control and data acquisition system of the ATLAS is based on a hybrid distributed control system (DCS) by RTP Corp. The ARIDES system on a LINUX platform, provided by BNF Technology Inc., is used as the control software. The I/O signals comprise 1995 channels and are processed at 10 Hz. The Human-Machine Interface (HMI) consists of 43 processing windows, classified according to fluid system. All control devices can be operated by manual, auto, sequence, group, and table control methods. The monitoring system can display real-time trends or historical data of selected I/O signals on LCD monitors in graphical form. The data logging system can be started or stopped by the operator, and the logging frequency can be selected among 0.5, 1, 2, and 10 Hz. The fluid system of the ATLAS facility consists of several systems, ranging from the primary system to auxiliary systems. Each fluid system has control similarity to the prototype plant, APR1400/OPR1000.
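    The selectable logging rates are simple decimations of the fixed 10 Hz acquisition rate: a minimal sketch, with illustrative function names, of how a lower logging rate keeps every Nth acquired sample:

```python
def decimation_step(acq_rate_hz, log_rate_hz):
    """Keep every Nth acquired sample to log at a lower rate; the
    selectable rates (0.5, 1, 2, 10 Hz) all divide 10 Hz evenly."""
    step = acq_rate_hz / log_rate_hz
    if abs(step - round(step)) > 1e-9:
        raise ValueError("logging rate must divide the acquisition rate")
    return int(round(step))

channel = list(range(100))                    # 10 s of one channel at 10 Hz
logged = channel[::decimation_step(10, 0.5)]  # every 20th sample -> 0.5 Hz log
```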

  18. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples and from knowledge of the underlying principles of image production. An atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted by nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas consists of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, and specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and troubleshooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion in a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations.
Some image patterns are

  19. Alignment of the ATLAS Inner Detector tracking system

    CERN Document Server

    Moles-Valls, R; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the products of LHC collisions. In order to reconstruct the trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built on silicon planar sensors (pixels and microstrips) and drift-tube based detectors, which together constitute the ATLAS Inner Detector. It contains 1744 pixel modules (1456 in 3 barrel layers and 288 in 6 end-cap disks). The pixel size is 50×400 μm². In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires the determination of its almost 36000 degrees of freedom (DoF) with high accuracy. The demanded precision for the alignment of the pixel and microstrip sensors is below 10 micrometers, which requires a large sample of high-momentum, isolated charged-particle tracks. The high level trigger selects those tracks online. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment if they cross the pixel detector vo...
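    In the simplest possible caricature of track-based alignment (nothing like the full 36000-DoF fit, which solves one large coupled least-squares system), each module's in-plane offset is estimated from the mean hit-minus-track residual of the tracks crossing it:

```python
def align(residuals_by_module):
    """Toy 1-D alignment: the mean residual per module is the offset that
    minimizes the sum of squared residuals for that module alone. The real
    ATLAS alignment couples all modules through shared track parameters."""
    return {mod: sum(r) / len(r) for mod, r in residuals_by_module.items()}

# Residuals in mm from a handful of high-momentum tracks (made-up numbers
# and module names, purely for illustration)
corrections = align({
    "pixel_b0_m3": [0.012, 0.009, 0.011],   # module shifted by roughly +10 um
    "sct_ec_d1":   [-0.004, -0.006],
})
```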

  20. Performance and Improvements of the ATLAS Jet Trigger System

    CERN Document Server

    Lang, V; The ATLAS collaboration

    2012-01-01

    At the harsh conditions of the LHC, with proton bunches colliding every 50 ns and up to 40 pp interactions per bunch crossing, the ATLAS trigger system has to be flexible, maintaining an unbiased efficiency for a wide variety of physics studies while providing fast rejection of uninteresting events. Jets are the most commonly produced objects at the LHC, essential for many physics measurements that range from precise QCD studies to searches for New Physics beyond the Standard Model, or even unexpected physics signals. The ATLAS jet trigger is the primary means of selecting events with high p_T jets, and its good performance is fundamental to achieving the physics goals of ATLAS. The ATLAS trigger system is divided into three levels, the first one (L1) being hardware based, with a 2 μs latency, and the two following ones (collectively called the High Level Trigger or HLT) being software based with longer processing times. It was designed to work in a Region of Interest (RoI) based approach, where the second leve...

  2. Performance and Improvements of the ATLAS Jet Trigger System

    CERN Document Server

    Conde Muino, P; The ATLAS collaboration

    2012-01-01

    At the harsh conditions of the LHC, with proton bunches colliding every 50 ns and up to 40 pp interactions per bunch crossing, the ATLAS trigger system has to be flexible, maintaining an unbiased efficiency for a wide variety of physics studies while providing fast rejection of uninteresting events. Jets are the most commonly produced objects at the LHC, essential for many physics measurements that range from precise QCD studies to searches for New Physics beyond the Standard Model, or even unexpected physics signals. The ATLAS jet trigger is the primary means of selecting events with high pT jets, and its good performance is fundamental to achieving the physics goals of ATLAS. The ATLAS trigger system is divided into three levels, the first one (L1) being hardware based, with a 2 μs latency, and the two following ones (collectively called the High Level Trigger or HLT) being software based with longer processing times. It was designed to work in a Region of Interest (RoI) based approach, where the second lev...

  3. The ALFA Roman Pot Detectors of ATLAS

    CERN Document Server

    Khalek, S Abdel; Anghinolfi, F; Barrillon, P; Blanchot, G; Blin-Bondil, S; Braem, A; Chytka, L; Muíño, P Conde; Düren, M; Fassnacht, P; Franz, S; Gurriana, L; Grafström, P; Heller, M; Haguenauer, M; Hain, W; Hamal, P; Hiller, K; Iwanski, W; Jakobsen, S; Joram, C; Kötz, U; Korcyl, K; Kreutzfeldt, K; Lohse, T; Maio, A; Maneira, M J P; Notz, D; Nozka, L; Palma, A; Petschull, D; Pons, X; Puzo, P; Ravat, S; Schneider, T; Seabra, L; Sykora, T; Staszewski, R; Stenzel, H; Trzebinski, M; Valkar, S; Viti, M; Vorobel, V; Wemans, A

    2016-01-01

    The ATLAS Roman Pot system is designed to determine the total proton-proton cross-section as well as the luminosity at the Large Hadron Collider (LHC) by measuring elastic proton scattering at very small angles. The system consists of four Roman Pot stations, located in the LHC tunnel at a distance of about 240 m on both sides of the ATLAS interaction point. Each station is equipped with tracking detectors, inserted in Roman Pots which approach the LHC beams vertically. The tracking detectors consist of multi-layer scintillating-fibre structures read out by multi-anode photomultipliers.
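    The measurement principle rests on the optical theorem: counting elastic and inelastic events in the same fill and extrapolating the elastic rate to zero momentum transfer yields a luminosity-independent total cross-section. A sketch with made-up event counts (ρ is the ratio of the real to imaginary part of the forward elastic amplitude):

```python
import math

GEV2_TO_MB = 0.3894   # (hbar*c)^2 conversion from GeV^-2 to millibarn

def sigma_tot_mb(dnel_dt_t0, n_el, n_inel, rho=0.14):
    """Luminosity-independent total cross-section:
    sigma_tot = 16*pi/(1+rho^2) * (dN_el/dt at t=0) / (N_el + N_inel),
    with dN_el/dt given in events per GeV^2."""
    sigma_gev2 = 16.0 * math.pi / (1.0 + rho ** 2) * dnel_dt_t0 / (n_el + n_inel)
    return sigma_gev2 * GEV2_TO_MB

# Made-up counts, chosen only to land near a typical LHC-scale value
sigma = sigma_tot_mb(dnel_dt_t0=5.21e8, n_el=3e7, n_inel=7e7)
```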

  4. A radiation-hard dual channel 4-bit pipeline for a 12-bit 40 MS/s ADC prototype with extended dynamic range for the ATLAS Liquid Argon Calorimeter readout electronics upgrade at the CERN LHC

    International Nuclear Information System (INIS)

    The design of a radiation-hard dual-channel 12-bit 40 MS/s pipeline ADC with extended dynamic range is presented, for use in the readout electronics upgrade for the ATLAS Liquid Argon Calorimeters at the CERN Large Hadron Collider. The design consists of two pipeline A/D channels with four Multiplying Digital-to-Analog Converters with nominal 12-bit resolution each. The design, fabricated in the IBM 130 nm CMOS process, shows a performance of 68 dB SNDR at 18 MHz for a single channel at 40 MS/s while consuming 55 mW/channel from a 2.5 V supply, and exhibits no performance degradation after irradiation. Various gain selection algorithms to achieve the extended dynamic range are implemented and tested
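    The gain-selection idea behind the extended dynamic range can be sketched as follows. This is an illustrative scheme only (digitize with the highest non-saturating gain and record which gain was used), not the chip's actual algorithm:

```python
def encode(v_in, full_scale=1.0, bits=12, gains=(4.0, 1.0)):
    """Digitize v_in with the highest gain that does not saturate the
    core ADC; the returned (code, gain) pair extends the dynamic range
    beyond the nominal 12 bits. Parameters are hypothetical."""
    for g in gains:                      # try the high gain first
        if abs(v_in) * g <= full_scale:
            return round(v_in * g / full_scale * (2 ** (bits - 1) - 1)), g
    raise OverflowError("input beyond extended range")

def decode(code, gain, full_scale=1.0, bits=12):
    """Reconstruct the input voltage from the code and the gain flag."""
    return code / (2 ** (bits - 1) - 1) * full_scale / gain

code, gain = encode(0.9)   # saturates the high gain -> falls back to gain 1
```

    Small signals keep the high gain (and its finer resolution); large signals fall back to unity gain, trading resolution for range.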

  5. ATLAS liquid argon calorimeter back end electronics

    CERN Document Server

    Bán, J; Bellachia, F; Blondel, A; Böttcher, S; Clark, A; Colas, Jacques; Díaz-Gómez, M; Dinkespiler, B; Efthymiopoulos, I; Escalier, M; Fayard, Lo; Gara, A; He, Y; Henry-Coüannier, F; Hubaut, F; Ionescu, G; Karev, A; Kurchaninov, L; Lafaye, R; Laforge, B; La Marra, D; Laplace, S; Le Dortz, O; Léger, A; Liu, T; Martin, D; Matricon, P; Moneta, L; Monnier, E; Oberlack, H; Parsons, J A; Pernecker, S; Perrot, G; Poggioli, L; Prast, J; Przysiezniak, H; Repetti, B; Rosselet, L; Riu, I; Schwemling, P; Simion, S; Sippach, W; Strässner, A; Stroynowski, R; Tisserant, S; Unal, G; Wilkens, H; Wingerter-Seez, I; Xiang, A; Yang, J; Ye, J

    2007-01-01

    The Liquid Argon calorimeters play a central role in the ATLAS (A Toroidal LHC Apparatus) experiment. The environment at the Large Hadron Collider (LHC) imposes strong constraints on the detectors' readout systems. In order to achieve very high precision measurements, the detector signals are processed at various stages before reaching the Data Acquisition system (DAQ). Signals from the calorimeter cells are received by on-detector Front End Boards (FEB), which sample the incoming pulse every 25 ns and digitize it at a trigger rate of up to 75 kHz. Off-detector Read Out Driver (ROD) boards further process the data and send reconstructed quantities to the DAQ while also monitoring the data quality. In this paper, the ATLAS Liquid Argon electronics chain is described first, followed by a detailed description of the off-detector readout system. Finally, the tests performed on the system are summarized.
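    The ROD-side reconstruction of a pulse amplitude from the 25 ns samples is conventionally an optimal-filtering-style weighted sum; a sketch with made-up weights and pedestal (the real coefficients are per-cell calibration constants):

```python
def of_amplitude(samples, pedestal, coeffs):
    """Amplitude estimate A = sum_i a_i * (s_i - p) over digitized
    samples s_i, pedestal p, and filter coefficients a_i. Real weights
    are derived per cell from the pulse shape and noise autocorrelation;
    the values used here are illustrative only."""
    return sum(a * (s - pedestal) for a, s in zip(coeffs, samples))

samples = [1000, 1250, 1500, 1375, 1125]   # five 25 ns samples (ADC counts)
coeffs = [0.0, 0.3, 1.0, -0.1, -0.2]       # hypothetical weights
amplitude = of_amplitude(samples, pedestal=1000, coeffs=coeffs)
```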

  6. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; De Seixas, J M; Di Mattia, A; Dos Anjos, A; Drohan, J; Díaz-Gómez, M; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; Flammer, J; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Ma, H; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Negri, F A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pinfold, J L; Pinto, P; Polesello, G; Pérez-Réale, V; Qian, Z; Rajagopalan, S; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A; Wengler, T; Werner, P; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Zobernig, H; CHEP 2003 Computing in High Energy Physics; Negri, France A.

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  7. The ATLAS Trigger System: Recent Experience and Future Plans

    International Nuclear Information System (INIS)

    This paper will give an overview of the ATLAS trigger design and its innovative features. It will describe the valuable experience gained in running the trigger reconstruction and event selection in the fast-changing environment of the detector commissioning during 2008. It will also include a description of the trigger selection menu and its 2009 deployment plan from first collisions to the nominal luminosity. ATLAS is one of the two general-purpose detectors at the Large Hadron Collider (LHC). The trigger system needs to efficiently reject a large rate of background events and still select potentially interesting ones with high efficiency. After a first level trigger implemented in custom electronics, the trigger event selection is made by the High Level Trigger (HLT) system, implemented in software. To reduce the processing time to manageable levels, the HLT uses seeded, step-wise and fast selection algorithms, aiming at the earliest possible rejection of background events. The ATLAS trigger event selection is based on the reconstruction of potentially interesting physical objects like electrons, muons, jets, etc. The recent LHC startup and short single-beam run provided the first test of the trigger system against real data. Following this period, ATLAS continued to collect cosmic-ray events for detector alignment and calibration purposes. Both running periods provided very important data to commission the trigger reconstruction and selection algorithms. Profiting from this experience and taking into account the ATLAS first year physics goals, we are preparing a trigger selection menu including several tracking, muon-finding and calorimetry algorithms. Using Monte Carlo simulated data, we are evaluating the impact of the trigger menu on physics performance and rate. (author)

  8. Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade

    CERN Document Server

    Liu, H; Chen, H.; Chen, K; Di Bello, F A; Iacobucci, G; Lanni, F; Peric, I; Ristic, B; Pinto, M Vicente Barreto; Wu, W; Xu, L; Jin, G

    2016-01-01

    High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom designed boards, a Xilinx ZC706 development board, a FELIX (Front-End LInk eXchange) PCIe card and a host computer. A software program has been developed in Python to control the CaRIBOu hardware. CaRIBOu has been used in the test beam of the HV-CMOS sensor CCPDv4 at CERN. Preliminary results have shown that the test system is very versatile. Further development is ongoing to adapt to different sensors, and to make it available to various lab test stands.

  9. Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) control system

    International Nuclear Information System (INIS)

    Given that the Argonne Tandem Linear Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present, and future of the ATLAS Control System, and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the sixties. With the addition of the Booster section in the late seventies came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and two CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades are also in the planning stages that will continue to evolve the control system. (authors)

  10. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level 1 rate of 75 kHz to a few kHz event build rate after Level 2 and a few hundred Hz output rate to disk. It has operated with an average data taking efficiency of about 94% during the recent years. The performance has far exceeded the initial requirements, with about 5 kHz event building rate and 500 Hz of output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and improve the performance. On the network side new core switches will be deployed and possible use of 10GBit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high level trigger system foresees a merging of the Level 2 and Event Filter functionality on a single node, including the event building. This will represent a big simplification of the existing system, while ...
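The rejection factors implied by the quoted rates are simple arithmetic (using the 2012 figures from the abstract):

```python
# Rate reductions quoted in the abstract: 75 kHz after Level 1,
# ~5 kHz event building after Level 2, ~500 Hz written to disk.
l1_rate_hz = 75_000
l2_build_rate_hz = 5_000
output_rate_hz = 500

l2_rejection = l1_rate_hz / l2_build_rate_hz      # Level 2 stage
ef_rejection = l2_build_rate_hz / output_rate_hz  # Event Filter stage
total_rejection = l1_rate_hz / output_rate_hz     # HLT overall

print(l2_rejection, ef_rejection, total_rejection)  # 15.0 10.0 150.0
```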

  11. Alignment of the ATLAS Inner Detector tracking system

    CERN Document Server

    Loddenkoetter, T; The ATLAS collaboration

    2010-01-01

    The Large Hadron Collider (LHC) at CERN is the world's largest particle accelerator. After a successful start-up run at 900 GeV in 2009, during 2010 the LHC will collide two proton beams at an unprecedented centre-of-mass energy of 7 TeV. ATLAS is one of the four multipurpose experiments that will record the products of the LHC proton-proton collisions. ATLAS is equipped, among others, with a charged-particle tracking system built on two different technologies: silicon planar sensors and drift-tube based detectors, constituting the ATLAS Inner Detector (ID). In order to achieve its scientific goals, ATLAS has quite stringent tracking performance requirements. Thus, the goal of the alignment is set such that the limited knowledge of the sensor locations should not deteriorate the resolution of the track parameters by more than 20% with respect to the intrinsic tracker resolution. In this manner the required precision for the alignment of the silicon sensors in its most sensitive direction is below 10 micrometers. T...

  12. Operational Experience with the ATLAS Pixel Detector

    CERN Document Server

    Lantzsch, Kerstin; The ATLAS collaboration

    2016-01-01

    Run 2 of the LHC is providing new challenges to track and vertex reconstruction with higher energies, denser jets and higher rates. Therefore the ATLAS experiment has constructed the first 4-layer Pixel detector in HEP, installing a new Pixel layer, also called Insertable B-Layer (IBL). In addition, the Pixel detector was refurbished with new service quarter panels to recover about 3% of defective modules lost during Run 1, and a new optical readout system to read out the data at higher speed while reducing the occupancy when running with increased luminosity. The commissioning, operation and performance of the 4-layer Pixel Detector will be presented.

  13. A front-end readout Detector Board for the OpenPET electronics system

    International Nuclear Information System (INIS)

    We present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.
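The leading-edge discrimination described here can be illustrated in software (a generic sketch, not the OpenPET firmware; the linear interpolation refinement, the 4 ns sampling period and the toy pulse are assumptions):

```python
# Illustrative leading-edge discriminator on a digitized waveform:
# find the first sample crossing a fixed threshold and refine the
# crossing time by linear interpolation between the two samples.

def leading_edge_time(samples, threshold, dt_ns):
    """Return the interpolated threshold-crossing time, or None."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # interpolate between the two straddling samples
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt_ns
    return None  # pulse never crossed the threshold

# Toy digitized pulse sampled every 4 ns (assumed 250 MS/s ADC).
pulse = [0, 1, 2, 10, 40, 80, 100, 90, 60, 30, 10, 2, 0]
print(leading_edge_time(pulse, threshold=20, dt_ns=4.0))  # ~13.33 ns
```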

  14. A self contained Linux based data acquisition system for 2D detectors with delay line readout

    International Nuclear Information System (INIS)

    This article describes a fast and self-contained data acquisition system for 2D gas-filled detectors with delay line readout. It allows the realization of time-resolved experiments on the millisecond scale. The acquisition system comprises an industrial PC running Linux, a commercial time-to-digital converter and an in-house developed histogramming PCI card. The PC provides mass storage for images and a graphical user interface for system monitoring and control. The histogramming card builds images with a maximum count rate of 5 MHz, limited by the time-to-digital converter. Histograms are transferred to the PC at 85 MB/s. This card also includes a time frame generator, a calibration channel unit and eight digital outputs for experiment control. The control software was developed for easy integration into a beamline, including scans. The system is fully operational at the Spanish beamline BM16 at the ESRF in France, the neutron beamlines Adam and Eva at the ILL in France, the Max Planck Institute in Stuttgart in Germany, the University of Copenhagen in Denmark and at the future ALBA synchrotron in Spain. Some representative collected images from synchrotron and neutron beamlines are presented.
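The position reconstruction behind a delay-line readout can be sketched as follows (purely illustrative; the detector length, end-to-end delay and hit times are invented, not taken from the published system):

```python
# In a delay-line readout the hit coordinate is encoded in the
# difference of the signal arrival times at the two ends of the
# delay line; the sum of the two times is the fixed total delay.

def delay_line_position(t_left_ns, t_right_ns, total_delay_ns, length_mm):
    """Map the arrival-time difference onto a coordinate from the left end."""
    frac = (t_left_ns - t_right_ns) / total_delay_ns  # in [-1, 1]
    return 0.5 * (1.0 + frac) * length_mm

# Hit a quarter of the way along a 200 mm line with 100 ns total
# delay (propagation speed 2 mm/ns): 25 ns to the left end, 75 ns
# to the right end.
print(delay_line_position(t_left_ns=25.0, t_right_ns=75.0,
                          total_delay_ns=100.0, length_mm=200.0))  # 50.0
```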

  15. A front-end readout Detector Board for the OpenPET electronics system

    Science.gov (United States)

    Choong, W.-S.; Abu-Nimeh, F.; Moses, W. W.; Peng, Q.; Vu, C. Q.; Wu, J.-Y.

    2015-08-01

    We present a 16-channel front-end readout board for the OpenPET electronics system. A major task in developing a nuclear medical imaging system, such as a positron emission computed tomograph (PET) or a single-photon emission computed tomograph (SPECT), is the electronics system. While there are a wide variety of detector and camera design concepts, the relatively simple nature of the acquired data allows for a common set of electronics requirements that can be met by a flexible, scalable, and high-performance OpenPET electronics system. The analog signals from the different types of detectors used in medical imaging share similar characteristics, which allows for a common analog signal processing. The OpenPET electronics processes the analog signals with Detector Boards. Here we report on the development of a 16-channel Detector Board. Each signal is digitized by a continuously sampled analog-to-digital converter (ADC), which is processed by a field programmable gate array (FPGA) to extract pulse height information. A leading edge discriminator creates a timing edge that is "time stamped" by a time-to-digital converter (TDC) implemented inside the FPGA. This digital information from each channel is sent to an FPGA that services 16 analog channels, and then information from multiple channels is processed by this FPGA to perform logic for crystal lookup, DOI calculation, calibration, etc.

  16. Upgraded Readout and Trigger Electronics for the ATLAS Liquid-Argon Calorimeters at the LHC at the Horizons 2018-2022

    CERN Document Server

    Damazio, D O; The ATLAS collaboration

    2013-01-01

    The ATLAS Liquid Argon (LAr) calorimeters produce a total of 182,486 signals which are digitized and processed by the front-end and back-end electronics at every triggered event. In addition, the front-end electronics sums analog signals to provide coarsely grained energy sums, called trigger towers, to the first-level trigger system, which is optimized for nominal LHC luminosities. However, the pile-up noise expected during the High Luminosity phases of the LHC will be increased by factors of 3 to 7. An improved spatial granularity of the trigger primitives is therefore proposed in order to improve the identification performance for trigger signatures, like electrons, photons, tau leptons, jets, total and missing energy, at high background rejection rates. For the first upgrade phase in 2018, new LAr Trigger Digitizer Boards (LTDB) are being designed to receive higher granularity signals, digitize them on the detector and send them via fast optical links to a new digital processing system (DPS). The DPS applies...

  17. The ATLAS Trigger System: Ready for Run 2

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger system has been used successfully for data collection in the 2009-2013 Run 1 operation cycle of the CERN Large Hadron Collider (LHC) at center-of-mass energies of up to 8 TeV. With the restart of the LHC for the new Run 2 data-taking period at 13 TeV, the trigger rates are expected to rise by approximately a factor of 5. This presentation gave a brief overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown period in order to deal with the increased trigger rates while efficiently selecting the physics processes of interest. These upgrades include changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system, and the merging of the previously two-level HLT system into a single processing farm.

  18. ATLAS TDAQ System Administration: evolution and re-design

    Science.gov (United States)

    Ballestrero, S.; Bogdanchikov, A.; Brasolin, F.; Contescu, C.; Dubrov, S.; Fazio, D.; Korol, A.; Lee, C. J.; Scannicchio, D. A.; Twomey, M. S.

    2015-12-01

    The ATLAS Trigger and Data Acquisition system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider at CERN. The online farm is composed of ∼3000 servers, processing the data read out from ∼100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown there has been a tremendous amount of work done by the ATLAS Trigger and Data Acquisition System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High-Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration of the Configuration Management systems to Puppet has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and Quattor have consequently been dismissed. Virtual Machine usage has been investigated and tested, and many of the core servers are now running on Virtual Machines. Virtualisation has also been used to adapt the High-Level Trigger farm as a batch system, which has been used for running Monte Carlo production jobs that are mostly CPU- and not I/O-bound. Finally, monitoring the health and the status of ∼3000 machines in the experimental area is obviously of the utmost importance, so the obsolete Nagios v2 has been replaced with Icinga, complemented by Ganglia as a performance data provider. This paper reports the actions taken by the System Administrators to produce a system capable of performing for the next three years of ATLAS data taking.

  19. Towards a molecular QCA wire: simulation of write-in and read-out systems

    Science.gov (United States)

    Pulimeno, A.; Graziano, M.; Demarchi, D.; Piccinini, G.

    2012-11-01

    Among emerging beyond-CMOS technologies, Molecular Quantum Dot Cellular Automata (MQCA) are regarded as extremely promising for computational purposes. The elementary nanoelectronic devices are molecular systems in which the binary encoding is provided by the charge localization within a molecule. As a consequence, there is no current flowing among the cells and power dissipation is dramatically reduced. We study a new real molecule that was synthesized ad hoc for this technology. In contrast to previous contributions, this study has the aim of assessing the realistic properties of this molecule in a prospective experimental system based on a molecular wire principle. We use a combination of ab initio calculations and molecular dynamics simulations and analyze the molecule's behavior when specific electric fields are applied to move the electrons inside the molecule in order to force a logic state. Our results allowed us (i) to assess the molecule's behavior and to explore the working points of our experimental system for the write-in, (ii) to introduce in this scenario new metrics for studying and using these new devices from an electronic point of view, and (iii) to give a perspective and to define design constraints for possible experimental solutions eligible for molecule-state read-out.

  20. The final phase of the ATLAS control system upgrade

    International Nuclear Information System (INIS)

    The ATLAS facility (Argonne Tandem-Linac Accelerator System) is located at the Argonne National Laboratory. The facility is a tool used in nuclear and atomic physics research focusing primarily on heavy-ion physics. Due to the complexity of the operation of the facility, a computerized control system has always been required. The nature of the design of the accelerator has allowed the accelerator to evolve over time to its present configuration. The control system for the accelerator has evolved as well, primarily in the form of additions to the original design. A project to upgrade the ATLAS control system replacing most of the major original components was first reported on in the Fall of 1992 during the Symposium Of North Eastern Accelerator Personnel (SNEAP) at the AECL, Chalk River Laboratories. A follow-up report was given in the Fall of 1993 at the First Workshop on Applications of Vsystem Software and Users' Meeting at the Brookhaven National Laboratory. This project is presently in its third and final phase. This paper briefly describes the ATLAS facility, summarizes the control system upgrade project, and explains the intended control system configuration at the completion of the final phase of the project

  1. Optical links for the ATLAS Pixel detector

    CERN Document Server

    Stucci, Stefania Antonia; The ATLAS collaboration

    2015-01-01

    Optical links are necessary to satisfy the high speed readout over long distances for advanced silicon detector systems. We report on the optical readout used in the newly installed central pixel layer (IBL) in the ATLAS experiment. The off-detector readout employs commercial optical to analog converters, which were extensively tested for this application. Performance measurements during installation and commissioning will be shown. With the increasing instantaneous luminosity in the next years, the next layers outwards of IBL of the ATLAS Pixel detector (Layer 1 and Layer 2) will reach their bandwidth limits. A plan to increase the bandwidth by upgrading the off-detector readout chain is put in place. The plan also involves new optical readout components, in particular the optical receivers, for which commercial units cannot be used and a new design has been made. The latter allows for a wider operational range in terms of data frequency and light input power to match the on-detector sending units on the pres...

  2. Optical links for the ATLAS Pixel detector

    CERN Document Server

    Stucci, Stefania Antonia; The ATLAS collaboration

    2015-01-01

    Optical links are necessary to satisfy the high speed readout over long distances for advanced silicon detector systems. We report on the optical readout used in the newly installed central pixel layer (IBL) in the ATLAS experiment. The off-detector readout employs commercial optical to analog converters, which were extensively tested for this application. Performance measurements during installation and commissioning will be shown. With the increasing instantaneous luminosity in the next years, the next layers outwards of IBL of the ATLAS Pixel detector (Layer 1 and Layer 2) will reach their bandwidth limits. A plan to increase the bandwidth by upgrading the off-detector readout chain is put in place. The plan also involves new optical readout components, in particular the optical receivers, for which commercial units cannot be used and a new design has been made. The latter allows for a wider operational range in terms of data frequency and light input power to match the on-detector sending units on the pres...

  3. Characterization of the FE-I4B pixel readout chip production run for the ATLAS Insertable B-layer upgrade

    CERN Document Server

    Backhaus, M

    2013-01-01

    The Insertable B-layer (IBL) is a fourth pixel layer that will be added inside the existing ATLAS pixel detector during the long LHC shutdown of 2013 and 2014. The new four-layer pixel system will ensure excellent tracking, vertexing and b-tagging performance in the high luminosity pile-up conditions projected for the next LHC run. The peak luminosity is expected to reach 3×10^34 cm^−2 s^−1 with an integrated luminosity over the IBL lifetime of 300 fb^−1, corresponding to a design lifetime fluence of 5×10^15 n_eq cm^−2 and an ionizing dose of 250 Mrad including safety factors. The production front-end electronics FE-I4B for the IBL was fabricated at the end of 2011 and has been extensively characterized on diced ICs as well as at the wafer level. The production tests at the wafer level were performed during 2012. Selected results of the diced IC characterization are presented, including measurements of the on-chip voltage regulators. The IBL powering scheme, which was chosen based on these resu...

  4. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    Science.gov (United States)

    Glatzer, Julian

    2015-12-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity of a factor of two with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions due to the factor of two increase in the number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally, it allows concurrent running of up to three different subdetector combinations, which is particularly useful for commissioning, calibration and test runs. An overview of the operational software framework of the L1CT system is given, with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition system. Trigger and dead-time rates are monitored coherently at all stages of processing and are logged by the online computing system for physics analysis, data quality assurance and operational debugging. In addition, the synchronisation of trigger inputs is monitored based on bunch-by-bunch trigger information. Several software tools allow for efficient display of the relevant information in the control room in a way useful for shifters and experts. The design of the framework aims at reliability, flexibility, and robustness of the system and takes into account the operational experience gained during Run 1. The Level-1 Central Trigger was successfully operated with high efficiency during the cosmic-ray, beam-splash and first Run 2 data taking with the full ATLAS detector.

  5. Task management in the new ATLAS production system

    International Nuclear Information System (INIS)

    This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top-level workflow manager which translates physicists' needs for production-level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count exceeds one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.

  6. Cosmic ray angular distribution employing plastic scintillation detectors and flash-ADC/FPGA-based readout systems

    International Nuclear Information System (INIS)

    Secondary cosmic rays are highly energetic particles produced in showers when primary cosmic rays from outer space strike molecules in the atmosphere. Many studies of cosmic rays show that the cosmic flux depends on atmospheric depth. At ground level, most secondary cosmic rays are muons, a type of charged particle, and their flux has an angular dependence. The purpose of this article was to develop a telescope with two plastic scintillation detectors to investigate the angular distribution of cosmic rays at ground level. Measurements were carried out in the vertical, 45-degree oblique and horizontal directions with respect to the earth's surface, approximately from North to South. The electronic readout system was developed from an 8-bit, 250 MS/s Flash Analog-to-Digital Converter (Flash-ADC) and an embedded Field Programmable Gate Array (FPGA)-based trigger. A LabVIEW™ interface was written for controlling the trigger system and taking data. For each direction, the deposited-energy spectra in the scintillators were obtained, and cosmic-ray events of interest were selected and counted. The angular distribution was obtained quantitatively. The experiment was performed at the University of Science-HCMC. (orig.)
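Measured rates in such a telescope are commonly compared with the approximate cos²θ zenith-angle dependence of ground-level muons; a minimal sketch of the expected relative rates for the three directions measured here:

```python
# Relative ground-level muon rate assuming the standard cos^2(theta)
# zenith-angle dependence (an approximation, valid for moderate
# muon energies; detector acceptance effects are ignored here).
import math

def relative_muon_rate(zenith_deg):
    """Relative flux for a given zenith angle, normalized to vertical."""
    return math.cos(math.radians(zenith_deg)) ** 2

for angle in (0, 45, 90):  # vertical, 45-degree oblique, horizontal
    print(angle, round(relative_muon_rate(angle), 3))
# prints: 0 1.0 / 45 0.5 / 90 0.0
```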

  7. A Time-Based Front End Readout System for PET & CT

    CERN Document Server

    Meyer, T C; Anghinolfi, F; Auffray, E; Dosanjh, M; Hillemanns, H; Hoffmann, H -F; Jarron, P; Kaplon, J; Kronberger, M; Lecoq, P; Moraes, D; Trummer, J

    2007-01-01

    In the framework of the European FP6's BioCare project, we develop a novel, time-based, photo-detector readout technique to increase sensitivity and timing precision for molecular imaging in PET and CT. The project aims to employ Avalanche Photo Diode (APD) arrays with state-of-the-art, high-speed, front-end amplifiers and discrimination circuits developed for the Large Hadron Collider (LHC) physics program at CERN, suitable to detect and process photons in a combined one-unit PET/CT detection head. In the so-called time-based approach our efforts focus on the system's timing performance with sub-nanosecond time jitter and time walk, and yet also provide information on photon energy without resorting to analog-to-digital conversion. The bandwidth of the electronic circuitry is compatible with the scintillator's intrinsic light response (e.g. ≤40 ns in LSO) and hence allows high-rate CT operation in single-photon counting mode. Based on commercial LSO crystals and Hamamatsu S8550 APD arrays, we show the system pe...
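One common time-based way to obtain energy information without an ADC is time-over-threshold; the sketch below is a generic illustration of that principle (the exponential pulse model, the amplitudes and the 40 ns decay time are assumptions, not the BioCare circuit):

```python
# Time-over-threshold (ToT) sketch: an exponentially decaying pulse
# A*exp(-t/tau) stays above a fixed threshold for a duration that
# grows with its amplitude, so the ToT encodes the pulse energy.
import math

def time_over_threshold(amplitude, threshold, tau_ns):
    """Duration for which A*exp(-t/tau) exceeds the threshold."""
    if amplitude <= threshold:
        return 0.0  # pulse never crosses the threshold
    return tau_ns * math.log(amplitude / threshold)

# Larger pulses stay above threshold longer -> ToT encodes energy.
print(time_over_threshold(100.0, 10.0, tau_ns=40.0))  # ~92.1 ns
print(time_over_threshold(200.0, 10.0, tau_ns=40.0))  # ~119.8 ns
```

Note the logarithmic relation: ToT compresses the energy scale, which is acceptable when only coarse energy discrimination is needed.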

  8. 1 ns time to digital converters for the KM3NeT data readout system

    Energy Technology Data Exchange (ETDEWEB)

    Calvo, David [IFIC, Instituto de Física Corpuscular, CSIC- Universidad de Valencia, C/Catedrático José Beltrán, 2, 46980 Paterna (Spain); Collaboration: KM3NeT Collaboration

    2014-11-18

    The KM3NeT collaboration aims at the construction of a multi-km3 high-energy neutrino telescope in the Mediterranean Sea consisting of thousands of glass spheres (nodes), each of them containing 31 photomultipliers (PMTs) of small photocathode area. The readout and data acquisition system of KM3NeT has to collect, treat and send to shore, in an economic way, the enormous amount of data produced by the photomultipliers. For this purpose, 31 high-resolution time-interval measuring channels are implemented as Time to Digital Converters (TDCs) on Field-Programmable Gate Arrays (FPGAs). TDCs are very common devices in particle physics experiments. Architectures with low resource occupancy are desirable, allowing the implementation of other instrumentation, communication and synchronization systems on the same device. The required resolution for measuring both time of flight and timestamp is 1 ns. A 4× oversampling technique with two high-frequency clocks is used to achieve this resolution. The proposed TDC firmware is developed using very few resources in a Xilinx Kintex-7.
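The effect of 4× oversampling can be sketched as a simple quantization: four evenly spaced sampling edges per base clock period divide that period into four bins (illustrative only; the 4 ns base period, corresponding to a 250 MHz clock, is an assumption, and the real design derives the four edges from two phase-shifted clocks):

```python
# Sketch of 4x oversampling: with 4 sampling edges per 4 ns clock
# period, an input edge is localized to a 1 ns bin. Purely
# illustrative of the resolution argument, not the FPGA firmware.

def tdc_timestamp_ns(edge_time_ns, clock_period_ns=4.0, phases=4):
    """Quantize an edge time to the preceding oversampled edge."""
    bin_ns = clock_period_ns / phases  # 4 ns / 4 = 1 ns resolution
    return int(edge_time_ns / bin_ns) * bin_ns

print(tdc_timestamp_ns(10.7))  # 10.0
print(tdc_timestamp_ns(11.2))  # 11.0
```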

  9. The Process Manager in the ATLAS DAQ System

    CERN Document Server

    Avolio, G; Lehmann-Miotto, G; Wiesmann, M; 15th IEEE Real Time Conference 2007

    2008-01-01

    The Process Manager is the component responsible for launching and controlling processes in the ATLAS DAQ system. The tasks of the Process Manager can be coarsely grouped into three categories: process creation, control and monitoring. Process creation implies the creation of the actual process on behalf of different users and the preparation of all the resources and data needed to actually start the process. Process control includes mostly process termination and UNIX signal dispatching. Process monitoring implies both giving state information on request and initiating call-backs to notify clients that processes have changed states. This paper describes the design and implementation of the DAQ Process Manager for the ATLAS experiment. Since the Process Manager is at the foundation of the DAQ control system, it must be extremely robust and tolerate the failure of any other DAQ service. Particular emphasis will be given to the testing and quality assurance procedures carried out to validate this component.
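The three task categories named here (creation, control via UNIX signals, monitoring) map directly onto standard POSIX process primitives; a generic sketch using the Python standard library (not the ATLAS implementation):

```python
# Generic sketch of the three Process Manager task groups using
# POSIX primitives: create a child process, monitor its state, and
# control it by dispatching a UNIX signal. Assumes a POSIX system
# with /bin/sleep available.
import signal
import subprocess

# Creation: launch a child process with its own arguments.
proc = subprocess.Popen(["sleep", "60"])

# Monitoring: poll() returns None while the process is running.
assert proc.poll() is None

# Control: dispatch SIGTERM to terminate it, then reap it.
proc.send_signal(signal.SIGTERM)
proc.wait()
print(proc.returncode)  # -15 on POSIX: terminated by signal 15
```

The real component adds what a one-shot script cannot: per-user privilege handling, callback notification of state changes, and tolerance to failures of other DAQ services.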

  10. The ATLAS tile calorimeter web systems for data quality

    International Nuclear Information System (INIS)

    The ATLAS detector consists of four major components: inner tracker, calorimeter, muon spectrometer and magnet system. In the Tile Calorimeter (TileCal), there are 4 partitions; each partition has 64 modules and each module has up to 48 channels. During the ATLAS pre-operation phase, a group of physicists needed to analyze the Tile Calorimeter data quality, generate reports and update the official database when necessary. The Tile Commissioning Web System (TCWS) retrieves information from different directories and databases, executes programs that generate results, stores comments and verifies the calorimeter status. TCWS integrates different applications, each one presenting a unique data view. The Web Interface for Shifters (WIS) supports monitoring tasks by managing test parameters and the overall calorimeter status. The TileComm Analysis stores plots, automatic analysis results and comments concerning the tests. As finer granularity became necessary, a new application was created: the Monitoring and Calibration Web System (MCWS). This application supports data quality analyses at the channel level by presenting the automatic analysis results, the known problematic channels and the channels masked by the shifters. Through the web system, it is possible to generate plots and reports related to the channels, identify new bad channels and update the Bad Channels List in the official ATLAS database (COOL DB). The Data Quality Monitoring Viewer (DQM Viewer) displays the automatic data quality results through an oriented visualization.

  11. Alignment of the ATLAS Inner Detector tracking system

    International Nuclear Information System (INIS)

    ATLAS is a multipurpose experiment that records the products of the LHC collisions. To reconstruct trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built of silicon planar sensors and drift-tube based detectors. They constitute the ATLAS Inner Detector. In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires the determination of its almost 36000 degrees of freedom (DoF) with high accuracy; the demanded precision for the alignment of the silicon sensors is below 10 micrometers. This requires a large sample of high-momentum, isolated charged-particle tracks. The high level trigger selects those tracks online, and the raw data with the hit information of the triggered tracks is stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of the track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of all tracking subsystems together. Primary vertexing and beam spot constraints have also been implemented, as well as constraints on survey measurements. As alignment algorithms are based on minimization of the track-hit residuals, one needs to solve a linear system with a large number of DoF. The solving involves the inversion or diagonalization of a large matrix that may be dense. The alignment jobs are executed at the CERN Analysis Facility. The event processing is run in parallel in many jobs, and the output matrices from all jobs are added before solving. We will present the results of the alignment of the ATLAS detector using real data recorded during the LHC start-up run in 2009 plus the recent 7 TeV data collected during the 2010 run. Validation of the alignment was performed by measuring the alignment observables as well as many other physics observables, notably resonance invariant masses. The results of the

  12. A Forward Silicon Strip System for the ATLAS HL-LHC Upgrade

    CERN Document Server

    Wonsak, S; The ATLAS collaboration

    2012-01-01

    The LHC is successfully accumulating luminosity at a centre-of-mass energy of 8 TeV this year. At the same time, plans are rapidly progressing for a series of upgrades, culminating roughly eight years from now in the High Luminosity LHC (HL-LHC) project. The HL-LHC is expected to deliver approximately five times the LHC nominal instantaneous luminosity, resulting in a total integrated luminosity of around 3000 fb-1 by 2030. The ATLAS experiment has a rather well advanced plan to build and install a completely new Inner Tracker (IT) system entirely based on silicon detectors by 2020. This new IT will be made from several pixel and strip layers. The silicon strip detector system will consist of single-sided p-type detectors with five barrel layers and six endcap (EC) disks on each forward side. Each disk will consist of 32 trapezoidal objects dubbed “petals”, with all services (cooling, read-out, command lines, LV and HV power) integrated into the petal. Each petal will contain 18 silicon sensors grouped in...

  13. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real time, achieving a rejection factor of the order of 40000x under real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology, which TCP cannot handle efficiently. A credit system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios for the real network. Network simulation with hybrid flows: speedups and accuracy, combined. • For intensive network traffic, discrete-event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...
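    The credit mechanism mentioned above, which caps the number of outstanding queries so that responses never converge on the requester all at once, can be sketched as follows. `CreditManager` and its methods are hypothetical names for illustration, not the actual TDAQ interface.

```python
from collections import deque

class CreditManager:
    """Toy credit scheme: at most `credits` requests may be outstanding
    to a data source; further requests queue until a response returns a
    credit. This bounds the burst of simultaneous responses (the incast
    trigger) at the cost of queueing delay on the requester side."""

    def __init__(self, credits):
        self.free = credits
        self.pending = deque()

    def request(self, req):
        """Send `req` if a credit is free, otherwise queue it."""
        if self.free > 0:
            self.free -= 1
            return req          # sent immediately
        self.pending.append(req)
        return None             # queued

    def on_response(self):
        """A response arrived: hand its credit to the next queued
        request, or return the credit to the free pool."""
        if self.pending:
            return self.pending.popleft()
        self.free += 1
        return None

# Demo: two credits, four requests — two go out, two wait for responses.
cm = CreditManager(2)
print([cm.request(i) for i in range(4)])   # → [0, 1, None, None]
print(cm.on_response(), cm.on_response())  # → 2 3
```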

  14. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity by a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions due to the doubled number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally - particularly useful for commissioning, calibration and test runs - it allows concurrent running of up to 3 different subdetector combinations. An overview of the operational software framework of the L1CT system, with particular emphasis on the configuration, controls and monitoring aspects, is given. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are monitored coherently at...

  15. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity by a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions due to the doubled number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally - particularly useful for commissioning, calibration and test runs - it allows concurrent running of up to 3 different sub-detector combinations. In this contribution, we give an overview of the operational software framework of the L1CT system, with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are m...

  16. A fast hardware tracker for the ATLAS trigger system

    International Nuclear Information System (INIS)

    The Fast TracKer (FTK) is an integral part of the trigger upgrade program for the ATLAS detector at the Large Hadron Collider (LHC). As the LHC luminosity approaches its design level of 10^34 cm^-2 s^-1, the combinatorial problem posed by charged-particle tracking becomes increasingly difficult due to the growing number of interactions per bunch crossing (pile-up). The FTK is a highly parallel hardware system intended to provide high-quality tracks with transverse momentum above 1 GeV/c. The FTK system design, based on a mixture of advanced technologies, and its expected physics performance will be presented. -- Author-Highlights: •The Fast TracKer (FTK) is an integral part of the trigger upgrade program for the ATLAS detector. •The FTK provides high-quality tracks for the ATLAS data acquisition system. •Track information from the FTK will reduce the difficulty caused by the increasing beam luminosity. •We report on the FTK performance, showing that it performs well with up to 75 pile-up events

  17. A multichannel compact readout system for single photon detection: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Argentieri, A.G. [Istituto Nazionale di Fisica Nucleare, via E. Orabona 4, 70126 Bari (Italy); Cisbani, E.; Colilli, S.; Cusanno, F. [Istituto Superiore di Sanita, viale Regina Elena 299, 00161 Roma (Italy); De Leo, R. [Istituto Nazionale di Fisica Nucleare, via E. Orabona 4, 70126 Bari (Italy); Fratoni, R.; Garibaldi, F.; Giuliani, F.; Gricia, M.; Lucentini, M. [Istituto Superiore di Sanita, viale Regina Elena 299, 00161 Roma (Italy); Marra, M. [Istituto Nazionale di Fisica Nucleare, via E. Orabona 4, 70126 Bari (Italy); Musico, Paolo, E-mail: Paolo.Musico@ge.infn.i [Istituto Nazionale di Fisica Nucleare, via Dodecaneso 33, 16146 Genova (Italy); Santavenere, F.; Torrioli, S. [Istituto Superiore di Sanita, viale Regina Elena 299, 00161 Roma (Italy)

    2010-05-21

    Optimal exploitation of Multi Anode PhotoMultiplier Tubes (MAPMT) as imaging devices requires the acquisition of a large number of independent channels; despite the rather wide demand, off-the-shelf electronics for this purpose does not exist. A compact independent-channel readout system for an array of MAPMTs has been developed and tested. The system can handle up to 4096 independent channels, covering an area of about 20x20 cm{sup 2} with a pixel size of 3x3 mm{sup 2}, using Hamamatsu H-9500 devices. The front-end is based on a 64-channel custom VLSI chip called MAROC, developed by the IN2P3 Orsay (France) group, controlled by means of a Field Programmable Gate Array (FPGA) which implements configuration, triggering and data-conversion controls. Up to 64 front-end cards can be housed in four backplanes, and a central unit collects data from all of them, communicating with a control Personal Computer (PC) over a high-speed USB 2.0 connection. A complete system has been built and tested. Eight flat MAPMTs (256-anode Hamamatsu H-9500) have been arranged on the boundary of a 3x3 matrix for a grand total of 2048 channels. This detector has been used to verify the performance of a focusing aerogel RICH prototype using an electron beam at the Frascati (Rome) INFN National Laboratory Beam Test Facility (BTF) during the last week of January 2009. Data analysis is ongoing: the first results are encouraging, showing that the Cherenkov rings are well identified by this system.

  18. A multichannel compact readout system for single photon detection: Design and performances

    Science.gov (United States)

    Argentieri, A. G.; Cisbani, E.; Colilli, S.; Cusanno, F.; De Leo, R.; Fratoni, R.; Garibaldi, F.; Giuliani, F.; Gricia, M.; Lucentini, M.; Marra, M.; Musico, Paolo; Santavenere, F.; Torrioli, S.

    2010-05-01

    Optimal exploitation of Multi Anode PhotoMultiplier Tubes (MAPMT) as imaging devices requires the acquisition of a large number of independent channels; despite the rather wide demand, off-the-shelf electronics for this purpose does not exist. A compact independent-channel readout system for an array of MAPMTs has been developed and tested [1,2]. The system can handle up to 4096 independent channels, covering an area of about 20×20 cm2 with a pixel size of 3×3 mm2, using Hamamatsu H-9500 devices. The front-end is based on a 64-channel custom VLSI chip called MAROC, developed by the IN2P3 Orsay (France) group, controlled by means of a Field Programmable Gate Array (FPGA) which implements configuration, triggering and data-conversion controls. Up to 64 front-end cards can be housed in four backplanes, and a central unit collects data from all of them, communicating with a control Personal Computer (PC) over a high-speed USB 2.0 connection. A complete system has been built and tested. Eight flat MAPMTs (256-anode Hamamatsu H-9500) have been arranged on the boundary of a 3×3 matrix for a grand total of 2048 channels. This detector has been used to verify the performance of a focusing aerogel RICH prototype using an electron beam at the Frascati (Rome) INFN National Laboratory Beam Test Facility (BTF) during the last week of January 2009. Data analysis is ongoing: the first results are encouraging, showing that the Cherenkov rings are well identified by this system.

  19. The PASERO Project: parallel and serial readout systems for gas proportional synchrotron radiation X-ray detectors

    CERN Document Server

    Koch, M H J; Briquet-Laugier, F; Epstein, A; Sheldon, S; Beloeuvre, E; Gabriel, A; Hervé, C; Kocsis, M; Koschuch, A; Laggner, P; Leingartner, W; Raad-Iseli, C D; Reimann, T; Golding, F; Torki, K

    2001-01-01

    A project aiming at producing more efficient position-sensitive gas proportional detectors and readout systems is presented. An area detector with reduced electrode spacing and a spatial resolution of 0.5 mm and two time-to-digital converters (TDC) based on ASICs were produced. The first TDC, intended for use with linear detectors, relies on time-to-space conversion, whereas the second one, for area detectors, uses a ring oscillator with a phase-locked loop. A parallel readout system for multi-anode detectors aiming at maximum count rate makes extensive use of RISC microcontrollers. An electronic simulator of linear detectors built for test purposes and a mechanical chopper used for attenuation of the X-ray beam are also briefly described.

  20. The dataflow system of the ATLAS DAQ and event filter prototype "-1" project

    CERN Document Server

    Mornacchi, Giuseppe

    1999-01-01

    The final design of the data acquisition (DAQ) and event filter (EF) system for the ATLAS experiment at the LHC is scheduled to start not earlier than 1999. Clear specification of the detector requirements, further technology investigation of hardware and software elements and integration studies are still required to reach maturity for the design. The ATLAS DAQ Group has chosen to approach such pre-design investigations via a structured prototype, supporting the evaluation of hardware and software technologies as well as their system integration aspects. A project has been proposed and approved by the ATLAS Collaboration for the design and implementation of a full DAQ/EF prototype, based on the trigger/DAQ architecture described in the ATLAS Technical Proposal and supporting studies of the full system functionality, although obviously not the required final performance. For this reason, it is referred to as ATLAS DAQ Prototype "-1". The prototype consists of a full "vertical" slice of the ATLAS DAQ/EF archi...

  1. Development and Test of a High Performance Multi Channel Readout System on a Chip with Application in PET/MR

    OpenAIRE

    2014-01-01

    The availability of new, compact, magnetic field tolerant sensors suitable for PET has opened the opportunity to build highly integrated PET scanners that can be included in commercial MR scanners. This combination has long been expected to have big advantages over existing systems combining PET and CT. This thesis describes my work towards building a highly integrated readout ASIC for application in PET/MR within the framework of the HYPERImage and SUBLIMA projects. It also gives a brief ...

  2. Direct ion storage dosimetry systems for photon, beta and neutron radiation with instant readout capabilities

    International Nuclear Information System (INIS)

    The direct ion storage (DIS) dosemeter is a new type of electronic dosemeter from which the dose information for both Hp(10) and Hp(0.07) can be obtained instantly at the workplace by using an electronic reader unit. The number of readouts is unlimited and the stored information is not affected by the readout procedure. The accumulated dose can also be electronically reset by authorised personnel. The DIS dosemeter represents a potential alternative for replacing the existing film and thermoluminescence dosemeters (TLDs) used in occupational monitoring due to its ease of use and low operating costs. The standard version for normal photon and beta dosimetry, as well as a developmental version for neutron dosimetry, have been characterised in several field studies. Two new small size variations are also introduced including a contactless readout device and a militarised version optimised for field use. (author)

  3. Microstrip electrode readout noise for load-dominated long shaping-time systems

    International Nuclear Information System (INIS)

    In cases such as that of the proposed International Linear Collider (ILC), for which the beam-delivery and detector-occupancy characteristics permit a long shaping-time readout of the microstrip sensors, it is possible to envision long (∼1 meter) daisy-chained ‘ladders’ of fine-pitch sensors read out by a single front-end amplifier. In this study, a long shaping-time (∼2μsec) front-end amplifier has been used to measure readout noise as a function of detector load. Comparing measured noise to that expected from lumped and distributed models of the load network, it is seen that network effects significantly mitigate the amount of readout noise contributed by the detector load. Further reduction in noise is demonstrated for the case that the sensor load is read out from its center rather than its end
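    A common lumped-element parametrization of readout noise combines a flat parallel-noise floor in quadrature with a series (voltage) noise term that grows with load capacitance; this is the kind of model the abstract compares against the distributed-network case. The coefficients below are invented placeholders, not the measured values from the study.

```python
import math

def enc_electrons(c_load_pf, enc_floor=350.0, slope_e_per_pf=10.0):
    """Toy lumped-element noise estimate in equivalent noise charge:
    ENC^2 = enc_floor^2 + (slope * C_load)^2. Both coefficients are
    illustrative only, not fitted to any measurement."""
    return math.hypot(enc_floor, slope_e_per_pf * c_load_pf)

# In this lumped picture the noise grows monotonically with the load,
# which is why network effects that reduce the effective load matter:
for c in (25.0, 50.0, 100.0):
    print(f"{c:5.1f} pF -> {enc_electrons(c):6.1f} e-")
```

    The abstract's observation is precisely that the measured noise falls below this lumped curve for long ladders, because the distributed RC network of the daisy chain attenuates the load seen by the amplifier.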

  4. A Triggerless readout system for the P-bar ANDA electromagnetic calorimeter

    International Nuclear Information System (INIS)

    One of the physics goals of the future P-bar ANDA experiment at FAIR is to study newly discovered exotic states. Because the detector response created by these particles is very similar to that of the background channels, a new type of data readout had to be developed, called ''triggerless'' readout. In this concept, each detector subsystem preprocesses the signal, so that in a later stage high-level physics constraints can be applied to select events of interest. A dedicated clock source using a protocol called SODANET over optical fibers ensures proper synchronisation between the components. For this new type of readout, a new way of simulating the detector response also needed to be developed, taking into account the effects of pile-up caused by the 20 MHz interaction rate

  5. Characterization of the ATLAS Micromegas quadruplet prototype

    Science.gov (United States)

    Sidiropoulou, O.; Bianco, M.; Danielsson, H.; Degrange, J.; Farina, E. M.; Gomez, F. P.; Iengo, P.; Kuger, F.; Lin, T. H.; Schott, M.; Sekhniaidze, G.; Valderanis, C.; Vergain, M.; Wotschack, J.

    2016-07-01

    A Micromegas [1] detector with four active layers, serving as a prototype for the upgrade of the ATLAS muon spectrometer [2], was designed and constructed in 2014 at CERN and represents the first example of a Micromegas quadruplet ever built. The detector has been realized using the resistive-strip technology and decoupling the amplification mesh from the readout structure. The four readout layers host a total of 4096 strips with a pitch of 415 μm; two layers have strips running parallel (η in the ATLAS reference system, for measuring the muon bending coordinate) and two layers have strips inclined by ±1.5° with respect to the η coordinate in order to provide a measurement of the second coordinate. A detector characterization carried out with cosmic muons and under X-ray irradiation is presented, together with the obtained results.
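    The second-coordinate measurement from the pair of ±1.5° stereo layers follows from simple geometry: the half-sum of the two stereo measurements gives the precision (η) coordinate, and their half-difference divided by tan(stereo angle) gives the second coordinate. The sketch below uses invented coordinates and is not the ATLAS reconstruction code.

```python
import math

def stereo_coords(u_mm, v_mm, stereo_deg=1.5):
    """Recover (eta, second coordinate) from a pair of stereo layers
    inclined by +stereo_deg and -stereo_deg with respect to eta.
    Small-angle geometry sketch; units are millimetres."""
    t = math.tan(math.radians(stereo_deg))
    eta = 0.5 * (u_mm + v_mm)           # precision coordinate
    second = 0.5 * (u_mm - v_mm) / t    # small angle -> magnified lever arm
    return eta, second

# Demo: a hit at eta = 10 mm, second coordinate = 100 mm.
t = math.tan(math.radians(1.5))
eta, second = stereo_coords(10.0 + 100.0 * t, 10.0 - 100.0 * t)
print(round(eta, 6), round(second, 6))   # → 10.0 100.0
```

    The 1/tan(1.5°) ≈ 38 magnification factor is why the second-coordinate resolution is much coarser than the η resolution for the same strip pitch.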

  6. Integration of the Omega-3 readout chip into a high energy physics experimental data acquisition system

    Science.gov (United States)

    Beker, H.; Chesi, E.; Martinengo, P.

    1997-02-01

    The Omega-3 readout chip is presented in detail elsewhere in the same proceedings. We here describe the integration of the chip into present and future experiments describing both hardware and software aspects. We cover preliminary tests in the laboratory and on the beam. The WA97 experiment has already used a pixel telescope in the past and intends to upgrade to the Omega-3 chip. A newly proposed experiment at CERN studying strangeness production in heavy ion collisions also plans to use a similar telescope. Finally, we give an outlook on the ongoing developments in the pixel readout architecture in the context of ALICE, the heavy ion experiment at the LHC collider.

  7. The ATLAS LAr Calorimeter Level 1 Trigger Signal pre-Processing System: Installation, Commissioning and Calibration Results.

    CERN Document Server

    Boulahouache, C; The ATLAS collaboration

    2009-01-01

    The Liquid Argon calorimeter is one of the main sub-detectors in the ATLAS experiment at the LHC. It provides precision measurements of electrons, photons, jets and missing transverse energy produced in the LHC pp collisions. The calorimeter information is a key ingredient in the first-level (L1) trigger decision to reduce the 40 MHz p-p bunch crossing rate to a few 100 kHz of accepted events waiting, in the system pipelines, to be read out in full precision. This presentation covers the LAr calorimeter electronics used to prepare signals for the L1 trigger. After exiting the cryostat, part of the current signal at the front end is split off the main readout path and summed with neighbouring channels, forming trigger towers which are transmitted in analog form over 50 to 70 meters to the counting room. There, the signals are calibrated, reordered and further summed for fast digitization using the L1 trigger hardware. Many factors like calorimeter capacitances and pulse shapes have to be taken into accoun...

  8. Control and monitoring system for TRT detector in ATLAS experiment

    CERN Document Server

    Hajduk, Z

    2002-01-01

    In this article we present methods and tools for the design and construction of the control and monitoring system for a big particle-physics experiment, taking as an example one of the ATLAS subdetectors. Several requirements have been enumerated which such a system has to meet, in both hardware and software. Harsh environmental conditions, difficult if not impossible access, and a very long exploitation time create conditions where only the application of industrial standards allows for serviceability, the possibility of fast and easy upgrades, and intuitive running of the system by relatively inexperienced staff. (6 refs).

  9. ATCA-based ATLAS FTK input interface system

    International Nuclear Information System (INIS)

    The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker are clustered and organized into overlapping η-φ trigger towers before being sent to the tracking engines. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce the data volume. Then, the ATCA-based Data Formatter system organizes the trigger-tower data, sharing data among boards over full-mesh backplanes and optical fibers. The board- and system-level design concepts and implementation details, as well as the operational experience from the FTK full-chain testing, will be presented
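    The clustering step on the Input Mezzanine cards, grouping contiguous fired strips into centroids so that one (position, size) pair replaces many raw hits, can be illustrated generically. This is a 1D sketch of the idea, not the FTK firmware algorithm.

```python
def cluster_hits(strip_ids):
    """Group fired strip numbers into clusters of contiguous strips and
    return one (centroid, size) pair per cluster — the data-volume
    reduction is len(strip_ids) hits down to len(clusters) records."""
    clusters = []
    for s in sorted(strip_ids):
        if clusters and s == clusters[-1][-1] + 1:
            clusters[-1].append(s)      # extend the current cluster
        else:
            clusters.append([s])        # gap found: start a new cluster
    return [(sum(c) / len(c), len(c)) for c in clusters]

# Six raw hits become three clusters:
print(cluster_hits([3, 4, 5, 9, 10, 42]))
# → [(4.0, 3), (9.5, 2), (42.0, 1)]
```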

  10. Development of a versatile readout and test system and characterization of a capacitively coupled active pixel sensor

    Energy Technology Data Exchange (ETDEWEB)

    Janssen, Jens; Gonella, Laura; Hemperek, Tomasz; Hirono, Toko; Huegging, Fabian; Krueger, Hans; Wermes, Norbert [Institute of Physics, University of Bonn, Bonn (Germany); Peric, Ivan [Karlsruher Institut fuer Technologie, Karlsruhe (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    With the availability of high-voltage and high-resistivity CMOS processes, active pixel sensors are becoming increasingly interesting for radiation detection in high-energy physics experiments. Although the pixel signal-to-noise ratio and the sensor radiation tolerance have been improved, active pixel sensors cannot yet compete with state-of-the-art hybrid pixel detectors in a high-radiation environment. Hence, active pixel sensors are possible candidates for the outer tracking detector in HEP experiments, where production cost plays a role. The investigation of numerous prototyping steps and different technologies is still ongoing and requires a versatile test and readout system, which will be presented in this talk. A capacitively coupled active pixel sensor fabricated in the AMS 180 nm high-voltage CMOS process is investigated. The sensor is designed to be glued to existing front-end pixel readout chips. Results from the characterization are presented in this talk.

  11. Development of a versatile readout and test system and characterization of a capacitively coupled active pixel sensor

    International Nuclear Information System (INIS)

    With the availability of high-voltage and high-resistivity CMOS processes, active pixel sensors are becoming increasingly interesting for radiation detection in high-energy physics experiments. Although the pixel signal-to-noise ratio and the sensor radiation tolerance have been improved, active pixel sensors cannot yet compete with state-of-the-art hybrid pixel detectors in a high-radiation environment. Hence, active pixel sensors are possible candidates for the outer tracking detector in HEP experiments, where production cost plays a role. The investigation of numerous prototyping steps and different technologies is still ongoing and requires a versatile test and readout system, which will be presented in this talk. A capacitively coupled active pixel sensor fabricated in the AMS 180 nm high-voltage CMOS process is investigated. The sensor is designed to be glued to existing front-end pixel readout chips. Results from the characterization are presented in this talk.

  12. The electronics readout and data acquisition system of the KM3NeT neutrino telescope node

    International Nuclear Information System (INIS)

    The KM3NeT neutrino telescope will be composed of tens of thousands of glass spheres, called Digital Optical Modules (DOM), each of them containing 31 PMTs of small photocathode area (3 inch). The readout and data acquisition system of KM3NeT has to collect, treat and send to shore, in an economic way, the enormous amount of data produced by the photomultipliers, and at the same time to provide time synchronization between the DOMs at the level of 1 ns. The present article describes the Central Logic Board, which integrates the Time-to-Digital Converters and the White Rabbit protocol used for the DOM synchronization in a transparent way; the Power Board used in the DOM; the PMT base used to read out the photomultipliers; and the respective collecting boards, the so-called Octopus Boards

  13. Integrated optical readout for miniaturization of cantilever-based sensor system

    DEFF Research Database (Denmark)

    Nordström, Maria; Zauner, Dan; Calleja, Montserrat;

    2007-01-01

    The authors present the fabrication and characterization of an integrated optical readout scheme based on single-mode waveguides for cantilever-based sensors. The cantilever bending is read out by monitoring changes in the optical intensity of light transmitted through the cantilever that also ac...

  14. Front-end electronics and readout system for the ILD TPC

    CERN Document Server

    Hedberg, V; Lundberg, B; Mjörnmark, U; Oskarsson, A; Österman, L; De Lentdecker, G; Yang, Y; Zhang, F

    2015-01-01

    A high resolution TPC is the main option for a central tracking detector at the future International Linear Collider (ILC). It is planned that the MPGD (Micro Pattern Gas Detector) technology will be used for the readout. A Large Prototype TPC at DESY has been used to test the performance of MPGDs in an electron beam of energies up to 6 GeV. The first step in the technology development was to demonstrate that the MPGDs are able to achieve the necessary performance set by the goals of the ILC. For this ’proof of principle’ phase, the ALTRO front-end electronics from the ALICE TPC was used, modified to adapt to MPGD readout. The proof of principle has been verified, and at present further improvement of the MPGD technology is going on, using the same readout electronics. The next step is the ’feasibility phase’, which aims at producing front-end electronics comparable in size (few mm2) to the readout pads of the TPC. This development work is based on the successor SALTRO16 chip, which combines the analogue ...

  15. Thermal Performance of ATLAS Laser Thermal Control System Demonstration Unit

    Science.gov (United States)

    Ku, Jentung; Robinson, Franklin; Patel, Deepak; Ottenstein, Laura

    2013-01-01

    The second Ice, Cloud, and Land Elevation Satellite mission currently planned by the National Aeronautics and Space Administration will measure global ice topography and canopy height using the Advanced Topographic Laser Altimeter System (ATLAS). The ATLAS comprises two lasers, but only one will be used at a time. Each laser will generate between 125 watts and 250 watts of heat, and each laser has its own optimal operating temperature that must be maintained within plus or minus 1 degree Centigrade accuracy by the Laser Thermal Control System (LTCS), consisting of a constant conductance heat pipe (CCHP), a loop heat pipe (LHP) and a radiator. The heat generated by the laser is acquired by the CCHP and transferred to the LHP, which delivers the heat to the radiator for ultimate rejection. The radiator can be exposed to temperatures between minus 71 degrees Centigrade and minus 93 degrees Centigrade. The two lasers can have different operating temperatures varying between plus 15 degrees Centigrade and plus 30 degrees Centigrade, and their operating temperatures are not known while the LTCS is being designed and built. Major challenges of the LTCS include: 1) A single thermal control system must maintain the ATLAS at plus 15 degrees Centigrade with a 250-watt heat load and a minus 71 degrees Centigrade radiator sink temperature, and maintain the ATLAS at plus 30 degrees Centigrade with a 125-watt heat load and a minus 93 degrees Centigrade radiator sink temperature. Furthermore, the LTCS must be qualification tested to maintain the ATLAS between plus 10 degrees Centigrade and plus 35 degrees Centigrade. 2) The LTCS must be shut down to ensure that the ATLAS can be maintained above its lowest desirable temperature of minus 2 degrees Centigrade during the survival mode. No software control algorithm for the LTCS can be activated during survival, and only thermostats can be used. 3) The radiator must be kept above minus 65 degrees Centigrade to prevent ammonia from freezing using no more

  16. High-rate irradiation of 15mm muon drift tubes and development of an ATLAS compatible readout driver for micromegas detectors

    CERN Document Server

    Zibell, Andre

    The upcoming luminosity upgrades of the LHC accelerator at CERN demand several upgrades to the detectors of the ATLAS muon spectrometer, mainly due to the proportionally increasing rate of uncorrelated background irradiation. This concerns also the "Small Wheel" tracking stations of the ATLAS muon spectrometer, where precise muon track reconstruction will no longer be assured when around 2020 the LHC luminosity is expected to reach values 2 to 5 times the design luminosity of $1 \\times 10^{34} \\text{cm}^{-2}\\text{s}^{-1}$, and when background hit rates will exceed 10 kHz/cm$^2$. This, together with the need of an additional triggering station in this area with an angular resolution of 1 mrad, requires the construction of "New Small Wheel" detectors for a complete replacement during the long maintenance period in 2018 and 2019. As possible technology for these New Small Wheels, high-rate capable sMDT drift tubes have been investigated, based on the ATLAS 30 mm Monitored Drift Tube technology, but with a smalle...

  17. Readout Architecture for Hybrid Pixel Readout Chips

    CERN Document Server

    AUTHOR|(SzGeCERN)694170; Westerlund, Tomi; Wyllie, Ken

    The original contribution of this thesis to knowledge is novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids ``broken'' columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures. An improvement of $>$ 10% in efficiency is achieved with uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modeling ($TLM$) and sequential high-level design techniques to reduce the design and simulation time. It has been possible to simulate tens of column and full-chip architectures using the high-level techniques. A decrease of $>$ 10 in run-time...
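    Readout efficiency in architectures like these is essentially a queueing question: hits arrive at the pixel matrix and a shared data path drains them at a finite rate, so hits arriving at a full buffer are lost. A toy model (all parameters invented, not from the thesis) shows the qualitative trade-off between buffer depth and efficiency:

```python
import random

def readout_efficiency(occupancy, buffer_depth, drain_prob,
                       n_bx=200_000, seed=1):
    """Toy column-buffer model: each bunch crossing produces a hit with
    probability `occupancy`; the shared data path drains one buffered
    hit with probability `drain_prob` per crossing; hits arriving at a
    full buffer are lost. Purely illustrative, not any chip's logic."""
    rng = random.Random(seed)
    buf = produced = lost = 0
    for _ in range(n_bx):
        if rng.random() < occupancy:
            produced += 1
            if buf < buffer_depth:
                buf += 1
            else:
                lost += 1               # buffer overflow: hit dropped
        if buf and rng.random() < drain_prob:
            buf -= 1                    # one hit read out this crossing
    return 1.0 - lost / produced

# A deeper buffer absorbs bursts at the same drain rate:
shallow = readout_efficiency(0.30, buffer_depth=1, drain_prob=0.35)
deep = readout_efficiency(0.30, buffer_depth=8, drain_prob=0.35)
print(round(shallow, 3), round(deep, 3))
```

    The same Monte Carlo framing is what high-level (TLM-style) simulation buys: architecture comparisons in seconds rather than gate-level runs.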

  18. Beam Test of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Thomas, J P; Typaldos, D; Watkins, P M; Watson, A; Achenbach, R; Föhlisch, F; Geweniger, C; Hanke, P; Kluge, E E; Mahboubi, K; Meier, K; Meshkov, P; Rühr, F; Schmitt, K; Schultz-Coulon, H C; Ay, C; Bauss, B; Belkin, A; Rieke, S; Schäfer, U; Tapprogge, T; Trefzger, T; Weber, GA; Eisenhandler, E F; Landon, M; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Mirea, A; Perera, V J O; Qian, W; Sankey, D P C; Bohm, C; Hellman, S; Hidvegi, A; Silverstein, S

    2005-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce Regions-of-Interest (RoIs) and trigger multiplicities. The latter are sent in real time to the Central Trigger Processor (CTP) where the Level-1 decision is made. On receipt of a Level-1 Accept, Readout Driver Modules (RODs) provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. RoI information is sent to the RoI Builder (RoIB) to help reduce the amount of data required for the Level-2 Trigger. The Level-1 Calorimeter Trigger system at the test beam consisted of 1 Preprocessor module, 1 Cluster Processor Module, 1 Jet/Energy Module and 2 Common Merger Modules. Calorimeter energies were successfully handled throughout the chain and trigger objects sent to the CTP. Level-1 Accepts were successfully produced and used to drive the readout path. Online diagno...

  19. Detector Control System of the ATLAS Tile Calorimeter

    CERN Document Server

    Arabidze, G; The ATLAS collaboration; Ribeiro, G; Santos, H; Vinagre, F

    2011-01-01

    The main task of the ATLAS Tile calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector are handled by DCS. The Tile calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the calibration systems, the data acquisition system, configuration and conditions databases and the detector safety system. The system has been operational since the beginning of LHC operation and has been extensively used in the operation of the detector. In the last months effort was directed to the implementation of automatic recovery of power supplies after trips. Current status, results and latest developments will be presented.

  20. Offset correction system for 128-channel self-triggering readout chip with in-channel 5-bit energy measurement functionality

    International Nuclear Information System (INIS)

    We report on a novel, two-stage 8-bit trimming solution dedicated to multichannel systems, with reduced trim DAC area occupancy. The presented design was used for comparator offset correction in a 128-channel particle-tracking, self-triggering readout system and was manufactured in a 180 nm CMOS process. The 8-bit trim DAC has a range of ±165 mV, a current consumption of 3.2 µA and occupies an area of 37 µm×17 µm in each channel, which corresponds to the area of a conventional 6-bit current-steering DAC with similar linearity
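    The quoted numbers fix the step size: an 8-bit code over a 330 mV span gives an LSB of about 1.29 mV. A short sketch of the code-to-voltage transfer (the coarse/fine split shown is an assumed illustration of a two-stage scheme, not the circuit from the paper):

```python
FULL_RANGE_MV = 330.0            # +/-165 mV range from the abstract
LSB_MV = FULL_RANGE_MV / 255     # ~1.29 mV per code

def trim_voltage(code):
    """Offset-correction voltage (mV) for an 8-bit trim code 0..255.

    Illustrative two-stage split (assumed, not from the paper): the
    upper 4 bits select a coarse segment, the lower 4 bits step
    within it; area savings come from sharing the coarse stage."""
    if not 0 <= code <= 255:
        raise ValueError("8-bit code required")
    coarse, fine = code >> 4, code & 0xF
    return (coarse * 16 + fine) * LSB_MV - FULL_RANGE_MV / 2

# trim_voltage(0) -> -165.0, trim_voltage(255) -> +165.0
```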

  1. A DSP-based readout and online processing system for a new focal-plane polarimeter at AGOR

    International Nuclear Information System (INIS)

    A Focal-Plane Polarimeter (FPP) for the large-acceptance Big-Bite Spectrometer (BBS) at AGOR, using a novel readout architecture, has been commissioned at the KVI Groningen. The instrument is optimized for medium-energy polarized proton scattering near or at 0 deg. For the handling of the high counting rates at extreme forward angles and for the suppression of small-angle scattering in the graphite analyzer, a high-performance DSP-based data processing system connecting to the LeCroy FERA and PCOS ECL bus architecture has been made operational and tested successfully. Details of the system and the functions of the various electronic components are described

  2. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in the three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of the given language to simplify end-user program writing. For example, as C++ lacks language support for declaring rich exception class hierarchies concisely, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue.
When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one of its constructors.
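    The one-line-declaration idea carries over naturally to Python, where a class factory plays the role of the C++ macro. This is an illustrative sketch of the pattern, not the actual ERS Python API:

```python
def issue_class(name, message_template, base=Exception):
    """Generate a fully qualified exception class from a one-line
    declaration, mimicking the ERS C++ macro approach in Python.
    (Illustrative sketch; the real ERS Python module differs.)"""
    def __init__(self, **context):
        self.context = context                       # named parameters
        super(cls, self).__init__(message_template.format(**context))
    cls = type(name, (base,), {"__init__": __init__})
    return cls

# One line declares an issue type with named parameters:
FileReadError = issue_class("FileReadError",
                            "cannot read {path}: {reason}")

try:
    raise FileReadError(path="/data/run.raw", reason="timeout")
except FileReadError as e:
    assert str(e) == "cannot read /data/run.raw: timeout"
```

The generated class carries its static description (the message template) while the constructor captures the run-time context, mirroring the split the abstract describes.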

  3. Integration of the Omega-3 readout chip into a high energy physics experimental data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Beker, H.; Chesi, E.; Martinengo, P. [European Organization for Nuclear Research, Geneva (Switzerland)

    1997-08-21

    The Omega-3 readout chip is presented in detail elsewhere in the same proceedings. We here describe the integration of the chip into present and future experiments describing both hardware and software aspects. We cover preliminary tests in the laboratory and on the beam. The WA97 experiment has already used a pixel telescope in the past and intends to upgrade to the Omega-3 chip. A newly proposed experiment at CERN studying strangeness production in heavy ion collisions also plans to use a similar telescope. Finally, we give an outlook on the ongoing developments in the pixel readout architecture in the context of ALICE, the heavy ion experiment at the LHC collider. (orig.). 11 refs.

  4. Integration of the Omega-3 readout chip into a high energy physics experimental data acquisition system

    International Nuclear Information System (INIS)

    The Omega-3 readout chip is presented in detail elsewhere in the same proceedings. We here describe the integration of the chip into present and future experiments describing both hardware and software aspects. We cover preliminary tests in the laboratory and on the beam. The WA97 experiment has already used a pixel telescope in the past and intends to upgrade to the Omega-3 chip. A newly proposed experiment at CERN studying strangeness production in heavy ion collisions also plans to use a similar telescope. Finally, we give an outlook on the ongoing developments in the pixel readout architecture in the context of ALICE, the heavy ion experiment at the LHC collider. (orig.)

  5. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to lower the event rate from the nominal bunch-crossing rate of 40 MHz to about 1 kHz for the design LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data-taking run, the LHC is expected to operate from 2015 with much higher instantaneous luminosities, and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...
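    The pattern-recognition core of FTK is associative-memory matching: a stored bank of coarse hit patterns is compared against the hits of each event. A sequential toy version (the real AM chips evaluate all patterns in parallel; the data layout here is an assumed simplification):

```python
def match_patterns(pattern_bank, hits_per_layer):
    """Toy associative-memory match: a pattern is a tuple of
    coarse-resolution hit addresses, one per detector layer, and it
    fires when every layer contains a hit at the stored address."""
    fired = []
    for pattern in pattern_bank:
        if all(addr in hits_per_layer[layer]
               for layer, addr in enumerate(pattern)):
            fired.append(pattern)
    return fired

bank = [(3, 5, 7), (3, 5, 8), (1, 2, 3)]
hits = [{3, 4}, {5}, {7, 9}]        # hit addresses in 3 layers
assert match_patterns(bank, hits) == [(3, 5, 7)]
```

Matched patterns define narrow "roads" in which a precise track fit is then performed, which is what keeps the combinatorics tractable at 100 kHz.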

  6. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Maeda, Junpei; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high-level trigger that together reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the data-taking period of Run-2, the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. In these proceedings, we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level higher-level trigger system into a single even...

  7. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
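    The migration described above separates data preparation from presentation: instead of HTML generated on the fly, the server returns JSON that the browser renders. A framework-free sketch of that split (function names and job-record fields are assumed for illustration, not PanDA's actual schema):

```python
import json

def job_summary(jobs):
    """Data preparation only: aggregate job counts per state."""
    summary = {}
    for job in jobs:
        summary[job["state"]] = summary.get(job["state"], 0) + 1
    return summary

def summary_view(jobs):
    """What a migrated monitor view returns (sketch): a JSON payload
    for the JSON/AJAX front end, rather than server-side HTML."""
    return json.dumps({"states": job_summary(jobs)})

jobs = [{"state": "running"}, {"state": "finished"},
        {"state": "running"}]
payload = summary_view(jobs)   # browser renders this client-side
```

Because the payload carries only data, the same endpoint serves both the human front end and any external system that wants machine-readable status.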

  8. Role Based Access Control system in the ATLAS experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F; Avolio, G

    2011-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing needs of restricting access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...

  9. Role Based Access Control System in the ATLAS Experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Avolio, G; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F

    2010-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing needs of restricting access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...

  10. Characterization and commissioning of the ATLAS micromegas quadruplet prototype

    CERN Document Server

    Bianco, Michele; The ATLAS collaboration; Iengo, Paolo; Lin, Tai-hua; Schott, Matthias; Sekhniaidze, Givi; Sidiropoulou, Ourania; Valderanis, Chrysostomos; Wotschack, Jorg; Zibell, Andre

    2014-01-01

    Micromegas (Micro Mesh Gaseous Detector) chambers have been chosen for the upgrade of the forward muon spectrometer of the ATLAS experiment to provide precision tracking and also to contribute to the trigger. A quadruplet (1 m × 0.5 m) has been built at the CERN laboratories; it will serve as a prototype for the future ATLAS chambers. This detector is realized using resistive-strip technology, decoupling the amplification mesh from the readout structure. The four readout planes host overall 4096 strips with a pitch of 415$\mu m$. A complete detector characterization carried out with cosmic rays, an X-ray source and a dedicated test beam is discussed; the characterization is obtained using the APV25 analog front-end chip. The efforts that led to the chamber construction and the preparation for the installation in the ATLAS experimental cavern are presented. Finally, an overview of the readout system developed for this prototype, and its integration into the ATLAS Data Acquisition System, is provided.
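    Precision tracking with strip planes like these usually comes from a charge-weighted cluster centroid, which resolves positions well below the 415 µm pitch. A minimal sketch of that standard estimate (my own illustration, not the analysis code used for this prototype):

```python
PITCH_UM = 415.0   # strip pitch of the quadruplet readout planes

def cluster_centroid(strip_charges, first_strip):
    """Charge-weighted centroid (in micrometres) of a strip cluster,
    the usual precision estimate for micromegas readout planes."""
    total = sum(strip_charges)
    weighted = sum((first_strip + i) * q
                   for i, q in enumerate(strip_charges))
    return PITCH_UM * weighted / total

# A track between strips 100 and 101, sharing charge 2:1,
# reconstructs a third of a pitch past strip 100:
pos = cluster_centroid([200, 100], first_strip=100)
```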

  11. Technical Design Report for the Phase-I Upgrade of the ATLAS TDAQ System

    CERN Document Server

    Aad, Georges; Abbott, Brad; Abdallah, Jalal; Abdel Khalek, Samah; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Achenbach, Ralf; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Aefsky, Scott; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmad, Ashfaq; Ahmadov, Faig; Aielli, Giulio; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexandrov, Evgeny; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allison, Lee John; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alonso, Francisco; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amaral Coutinho, Yara; Amelung, Christoph; Amor Dos Santos, Susana Patricia; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, John Thomas; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Araujo Ferraz, Victor; Arce, Ayana; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; 
Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Augsten, Kamil; Augusto, José; Aurousseau, Mathieu; Avolio, Giuseppe; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baas, Alessandra; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Sarah; Balek, Petr; Ballestrero, Sergio; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bartsch, Valeria; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Batraneanu, Silvia; Battistin, Michele; Bauer, Florian; Bauss, Bruno; Bawa, Harinder Singh; Beacham, James Baker; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernard, Clare; Bernat, Pauline; 
Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertelsen, Henrik; Bertolucci, Federico; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Bittner, Bernhard; Black, Curtis; Black, James

    2013-01-01

    The Phase-I upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is to allow the ATLAS experiment to efficiently trigger and record data at instantaneous luminosities that are up to three times that of the original LHC design while maintaining trigger thresholds close to those used in the initial run of the LHC.

  12. Development of a beam test telescope based on the Alibava readout system

    Science.gov (United States)

    Marco-Hernández, R.

    2011-01-01

    A telescope for beam tests has been developed as a result of a collaboration among the University of Liverpool, Centro Nacional de Microelectrónica (CNM) of Barcelona and Instituto de Física Corpuscular (IFIC) of Valencia. This system is intended to carry out both analogue charge collection and spatial resolution measurements with different types of microstrip or pixel silicon detectors in a beam test environment. The telescope has four XY measurement as well as trigger planes (XYT board) and it can accommodate up to twelve devices under test (DUT board). The DUT board uses two Beetle ASICs for the readout of chilled silicon detectors. The board can operate in a self-triggering mode. The board features a temperature sensor and it can be mounted on a rotary stage. A Peltier element is used for cooling the DUT. Each XYT board measures the track space points using two silicon strip detectors connected to two Beetle ASICs. It can also trigger on the particle tracks in the beam test. The board includes a CPLD which synchronizes the trigger signal to a common clock frequency, delays it and implements coincidences with other XYT boards. An Alibava mother board is used to read out and to control each XYT/DUT board from a common trigger signal and a common clock signal. The Alibava board has an on-board TDC to time-stamp each trigger. The data collected by each Alibava board is sent to a master card by means of a local data/address bus following a custom digital protocol. The master board distributes the trigger, clock and reset signals. It also merges the data streams from up to sixteen Alibava boards. The board also has a test channel for testing an XYT or DUT board in a standard mode. This board is implemented with a Xilinx development board and a custom patch board. The master board is connected with the DAQ software via 100 Mbit Ethernet. Track-based alignment software has also been developed for the data obtained with the DAQ software.
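    The CPLD's job of synchronizing asynchronous triggers to a common clock and then forming a coincidence across planes can be sketched as follows (a behavioural toy of the logic described, not the actual firmware; times and the 25 ns clock are assumed examples):

```python
def synchronize(trigger_times, clock_period):
    """Assign each asynchronous trigger time to the clock cycle it
    falls in, as the XYT-board CPLD does before coincidences."""
    return {int(t // clock_period) for t in trigger_times}

def coincidence(planes, clock_period):
    """Toy AND of several XYT planes: the clock cycles in which
    every plane produced a synchronized trigger."""
    synced = [synchronize(p, clock_period) for p in planes]
    return sorted(set.intersection(*synced))

# Three planes, 25 ns clock: only the track near t = 98 ns fires
# all three planes within the same clock cycle (cycle 3)
planes = [[99.0, 301.0], [95.5, 420.0], [98.2]]
assert coincidence(planes, 25.0) == [3]
```

Binning triggers into clock cycles before the AND is what makes the coincidence decision deterministic despite the planes firing at slightly different times.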

  13. Development of a beam test telescope based on the Alibava readout system

    Energy Technology Data Exchange (ETDEWEB)

    Marco-Hernandez, R, E-mail: rmarco@ific.uv.es [Instituto de Física Corpuscular (CSIC-UV), Edificio Institutos de Investigación, Polígono de La Coma, s/n. E-46980 Paterna (Valencia) (Spain)

    2011-01-15

    A telescope for beam tests has been developed as a result of a collaboration among the University of Liverpool, Centro Nacional de Microelectrónica (CNM) of Barcelona and Instituto de Física Corpuscular (IFIC) of Valencia. This system is intended to carry out both analogue charge collection and spatial resolution measurements with different types of microstrip or pixel silicon detectors in a beam test environment. The telescope has four XY measurement as well as trigger planes (XYT board) and it can accommodate up to twelve devices under test (DUT board). The DUT board uses two Beetle ASICs for the readout of chilled silicon detectors. The board can operate in a self-triggering mode. The board features a temperature sensor and it can be mounted on a rotary stage. A Peltier element is used for cooling the DUT. Each XYT board measures the track space points using two silicon strip detectors connected to two Beetle ASICs. It can also trigger on the particle tracks in the beam test. The board includes a CPLD which synchronizes the trigger signal to a common clock frequency, delays it and implements coincidences with other XYT boards. An Alibava mother board is used to read out and to control each XYT/DUT board from a common trigger signal and a common clock signal. The Alibava board has an on-board TDC to time-stamp each trigger. The data collected by each Alibava board is sent to a master card by means of a local data/address bus following a custom digital protocol. The master board distributes the trigger, clock and reset signals. It also merges the data streams from up to sixteen Alibava boards. The board also has a test channel for testing an XYT or DUT board in a standard mode. This board is implemented with a Xilinx development board and a custom patch board. The master board is connected with the DAQ software via 100 Mbit Ethernet. Track-based alignment software has also been developed for the data obtained with the DAQ software.

  14. Real time physics analysis with the ATLAS tau trigger system

    International Nuclear Information System (INIS)

    The scope of the ATLAS tau trigger system at the LHC is most ambitious. It aims at reconstructing in real time, in a matter of seconds, a detailed picture of the high-energy proton-proton collisions at the LHC. Such a system is mandatory in order to efficiently select the data needed for the discovery of new physics in a proton-proton collision environment where the rates of jets observed in the detector are high and tau identification is difficult. New physics scenarios targeted specifically by the ATLAS tau trigger system are Standard Model or Supersymmetric Higgs production, and the production of new exotic resonances. This contribution will detail how the analysis techniques developed offline for efficient data analysis have been implemented in the algorithms which run online at the trigger. In particular, the focus will be on how to satisfy the requirements imposed by the physics goals while addressing the limitations from the overall event rate and latency allowed. The prospects for early running during the first LHC collisions and the trigger evolution from first collisions to stable running will also be summarized, following the change of trigger goals from commissioning of the detector to measurement of Standard Model physics and discoveries. (author)

  15. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run time to a place where it can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  16. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors happening at run time to a place where it can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  17. ATLAS Tier-2 monitoring system for the German cloud

    International Nuclear Information System (INIS)

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish the status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on the generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud with fast and efficient access to the required information starting from a high level summary for the whole cloud to detailed diagnostics for the single site services. This approach provides the efficient identification of correlated site problems and simplifies the administration on both cloud and site level.
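    The core of such a cloud monitor is collapsing many per-site, per-area checks into one high-level view. A minimal sketch of that aggregation step (site names, area names and the OK/FAIL convention are assumed for illustration, not the tool's actual data model):

```python
def cloud_overview(site_reports):
    """Collapse per-site, per-area monitoring results into the
    high-level cloud summary; a site is flagged as soon as any
    monitored area reports a problem."""
    overview = {}
    for site, areas in site_reports.items():
        failing = sorted(a for a, ok in areas.items() if not ok)
        overview[site] = "OK" if not failing else "FAIL: " + ", ".join(failing)
    return overview

reports = {
    "GridKa":  {"transfers": True,  "sam": True, "batch": True},
    "DESY-HH": {"transfers": False, "sam": True, "batch": True},
}
overview = cloud_overview(reports)
```

Keeping the per-area detail alongside the rolled-up flag is what allows drill-down from the cloud summary to single-site diagnostics, and it also makes correlated failures (the same area failing at several sites) easy to spot.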

  18. Upgrading the ATLAS fast calorimeter simulation

    CERN Document Server

    Hubacek, Zdenek; The ATLAS collaboration

    2016-01-01

    Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time. In ATLAS, a fast simulation of the calorimeter systems was developed, called Fast Calorimeter Simulation (FastCaloSim). It provides a parametrized simulation of the particle energy response at the calorimeter read-out cell level. It is interfaced to the standard ATLAS digitization and reconstruction software, and can be tuned to data more easily than with GEANT4. An improved parametrization is being developed, to eventually address shortcomings of the original version. It makes use of statistical techniques such as principal component analysis, and a neural network parametrization to optimise the amount of information to store in the ATL...
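    The parametrization idea — describe the per-layer energy response by a mean shape plus fluctuations along a few principal components — can be sketched in miniature. All numbers below are invented placeholders standing in for the PCA that FastCaloSim derives from GEANT4 samples; this is not the FastCaloSim algorithm itself:

```python
import random

# Illustrative numbers (assumed, not from FastCaloSim): mean energy
# fraction per calorimeter layer and the leading principal component
# of the layer-to-layer fluctuations.
MEAN_FRACTIONS = [0.15, 0.70, 0.15]
FIRST_PC = [0.05, -0.08, 0.03]

def fast_sim_energy(total_energy, seed=None):
    """Parametrized per-layer response: mean shower shape plus a
    fluctuation along one principal component, renormalised so the
    layer energies sum to the incident energy."""
    rng = random.Random(seed)
    alpha = rng.gauss(0.0, 1.0)          # coordinate along the PC
    fractions = [max(m + alpha * c, 0.0)
                 for m, c in zip(MEAN_FRACTIONS, FIRST_PC)]
    norm = sum(fractions)
    return [total_energy * f / norm for f in fractions]

layers = fast_sim_energy(50.0, seed=7)   # layer energies in GeV
```

Sampling one Gaussian per principal component is orders of magnitude cheaper than tracking a full GEANT4 shower, which is the point of the fast simulation.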

  19. Readout of a superconducting qubit. A problem of quantum escape processes for driven systems

    International Nuclear Information System (INIS)

    We started this work with a description of two devices that were recently developed in the context of quantum information processing. These devices are used as read-out for superconducting quantum bits based on Josephson junctions. The classical description has to be extended to the quantum regime. As the main result we calculate the leading-order corrections in ℏ to the escape rate. We took into account a standard metastable potential with a static energy barrier and showed how to derive an extension of the classical diffusion equation. We did this within a systematic semiclassical formalism starting from a quantum mechanical master equation. This master equation contains an extra term for the loss of population due to tunneling through the barrier and, in contrast to previous approaches, finite barrier transmission which also affects the transition probabilities between the states. The escape rate is obtained from the stationary non-equilibrium solution of the diffusion equation. The quantum corrections to the escape rate are captured by two factors: the first one describes zero-point fluctuations in the well, while the second one describes the impact of finite barrier transmission close to the top. Interestingly, for weak friction there exists a temperature range where the latter can actually prevail and lead to a reduction of the escape rate compared to the classical situation, due to finite reflection from the barrier even for energies above the barrier. Only for lower temperatures does the quantum result exceed the classical one. The approach cannot strictly be used for the Duffing oscillator because of the time-dependent term in its Hamiltonian. But it is possible to move to a frame rotating with a frequency equal to the response frequency of the Duffing oscillator in order to obtain a time-independent Hamiltonian. Therefore a system-plus-reservoir model was applied to consistently derive, in the weak coupling limit, the master equation for the reduced
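    The factorized structure of the result can be written schematically against the standard transition-state (Kramers-type) escape rate; the notation below is assumed for illustration and is not taken verbatim from the thesis:

```latex
% Classical escape rate over a static barrier of height \Delta V
% (simplest transition-state form; \omega_0 is the well frequency)
\Gamma_{\mathrm{cl}} \;=\; \frac{\omega_0}{2\pi}\,
    e^{-\Delta V / k_B T}

% Leading quantum corrections factorize as described in the abstract:
% f_{\mathrm{well}} captures zero-point fluctuations in the well,
% f_{\mathrm{barrier}} the finite transmission/reflection near the top
\Gamma_{\mathrm{q}} \;=\; f_{\mathrm{well}}\; f_{\mathrm{barrier}}\;
    \Gamma_{\mathrm{cl}}
```

For weak friction there is a temperature window where $f_{\mathrm{barrier}} < 1$ dominates and $\Gamma_{\mathrm{q}} < \Gamma_{\mathrm{cl}}$, matching the reduction of the escape rate noted above.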

  20. gLExec Integration with the ATLAS PanDA Workload Management System

    Science.gov (United States)

    Karavakis, E.; Barreiro, F.; Campana, S.; De, K.; Di Girolamo, A.; Litmaath, M.; Maeno, T.; Medrano, R.; Nilsson, P.; Wenaus, T.

    2015-12-01

    ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm ensures high job reliability and, although it has clear advantages such as a homogeneous working environment, it presents security and traceability challenges. To address these challenges, gLExec can be used to execute the payload of each user under a different UNIX user id that uniquely identifies the ATLAS user. This paper describes the recent improvements and evolution of the security model within the ATLAS PanDA system, including improvements in the PanDA pilot and the PanDA server and their integration with MyProxy, a credential caching system that entitles a person or a service to act in the name of the issuer of the credential. Finally, it presents results from ATLAS user jobs running with gLExec and describes the deployment campaign within ATLAS.

  1. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Karavakis, Edward; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Litmaath, Maarten; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm ensures high job reliability and, although it has clear advantages such as a homogeneous working environment, it presents security and traceability challenges. To address these challenges, gLExec can be used to execute the payload of each user under a different UNIX user id that uniquely identifies the ATLAS user. This paper describes the recent improvements and evolution of the security model within the ATLAS PanDA system, including improvements in the PanDA pilot and the PanDA server and their integration with MyProxy, a credential caching system that entitles a person or a service to act in the name of the issuer of the credential. Finally, it presents results from ATLAS user jobs running with gLExec and describes the deployment campaign within ATLAS.

  2. Design and performance of the ATLAS jet trigger system

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2015-01-01

    The CERN Large Hadron Collider is the largest and most powerful particle collider ever built. It produces up to 40 million proton-proton collisions per second at unprecedented energies to explore the fundamental laws and properties of nature. The ATLAS experiment is one of the detectors that analyse and record these collisions. It generates a huge data volume that has to be reduced before it can be permanently stored. The event selection is made by the ATLAS trigger system, which reduces the data volume by a factor of 10^{5}. The trigger system has to be highly configurable in order to adapt to changing running conditions and maximize the physics output whilst keeping the output rate under control. A particularly interesting pattern generated during collisions consists of a collimated spray of particles, known as a hadronic jet. To retain the interesting jets and efficiently reject the overwhelming background, optimal jet energy resolution is needed. Therefore the Jet trigger software requires CPU-intensive ...

  3. Performance of the ATLAS Trigger and Data-Acquisition system

    CERN Document Server

    Dobson, E

    2011-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of 200 Hz. The ATLAS trigger is designed to select signal-like events from a large background in three levels: a first-level (L1) implemented in custom-built electronics, as well as the two levels of the high level trigger (HLT) software triggers executed on large computing farms. The first-level trigger is comprised of calorimeter, muon and forward triggers to identify event features such as missing transverse energy, as well as candidate electrons, photons, jets and muons. Input signals from these objects are processed by the L1 Central Trigger to form a L1 Accept (L1A) decision. L1A and timing information is consequently sent to all sub-detectors, which push their data to DAQ buffers. The first part of the HLT system (called Level 2) pulls the data from the buffers on demand, while the second part (called Event F...
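    The rate-reduction arithmetic quoted above can be checked directly. In the sketch below, only the 40 MHz bunch-crossing rate and the 200 Hz recording rate come from the abstract; the intermediate L1 and Level-2 output rates are illustrative assumptions, not ATLAS specifications.

```python
# Back-of-the-envelope check of the TDAQ rate reduction.
# Only the 40 MHz input and 200 Hz output are quoted values;
# the intermediate rates are assumptions for illustration.
bunch_crossing_rate = 40e6   # Hz, LHC design bunch-crossing rate
l1_output_rate = 75e3        # Hz, assumed L1 accept rate
l2_output_rate = 3e3         # Hz, assumed Level-2 output rate
recording_rate = 200.0       # Hz, average recording rate

def rejection(rate_in: float, rate_out: float) -> float:
    """Rate-reduction factor achieved between two stages."""
    return rate_in / rate_out

print(f"L1 rejection:    {rejection(bunch_crossing_rate, l1_output_rate):.0f}")
print(f"L2 rejection:    {rejection(l1_output_rate, l2_output_rate):.0f}")
print(f"EF rejection:    {rejection(l2_output_rate, recording_rate):.0f}")
print(f"Total rejection: {rejection(bunch_crossing_rate, recording_rate):.0f}")  # 200000
```

The total rejection of 2 × 10^5 is fixed by the quoted endpoints; only its split across the three levels is assumed here.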

  4. Performance of the ATLAS Trigger and Data Acquisition system

    CERN Document Server

    Dobson, E; The ATLAS collaboration

    2011-01-01

    "The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of 200 Hz. The ATLAS trigger is designed to select signal-like events from a large background in three levels: a first-level (L1) implemented in custom-built electronics, as well as the two levels of the high level trigger (HLT) software triggers executed on large computing farms. The first-level trigger is comprised of calorimeter, muon and forward triggers to identify event features such as missing transverse energy, as well as candidate electrons, photons, jets and muons. Input signals from these objects are processed by the L1 Central Trigger to form a L1 Accept (L1A) decision. L1A and timing information is consequently sent to all sub-detectors, which push their data to DAQ buffers. The first part of the HLT system (called Level 2) pulls the data from the buffers on demand, while the second part (called Event Filter) works with the who...

  5. A fast hardware tracker for the ATLAS trigger system

    International Nuclear Information System (INIS)

    The Fast Tracker (FTK) processor is an approved ATLAS upgrade that will reconstruct tracks using the full silicon tracker at the Level-1 rate (up to 100 kHz). FTK uses a completely parallel approach to read the silicon tracker information, execute the pattern matching and reconstruct the tracks. This approach, according to detailed simulation results, allows full tracking with nearly offline resolution within an execution time of 100 μs. A central component of the system is the associative memory (AM); these special devices reduce the pattern-matching combinatorial problem by providing identification of coarse-resolution track candidates. The system consists of a pipeline of several components whose goal is to organize and filter the data for the AM, then to reconstruct and filter the final tracks. This document presents an overview of the system and reports the status of the different elements of the system
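    The associative-memory matching described above can be sketched in software: each stored pattern lists one coarse-resolution hit per silicon layer, and a candidate "road" fires when enough layers match. The pattern bank, hit encoding and majority threshold below are invented for illustration; the real AM chips perform this comparison in parallel in custom hardware.

```python
# Minimal software sketch of associative-memory pattern matching.
# Pattern bank, hit encoding and threshold are illustrative assumptions.
from typing import List, Sequence, Set, Tuple

def match_patterns(hits_per_layer: Sequence[Set[int]],
                   pattern_bank: Sequence[Tuple[int, ...]],
                   min_layers: int = 7) -> List[int]:
    """Return indices of patterns whose coarse hit is present in at
    least `min_layers` layers (majority logic tolerates inefficiency)."""
    roads = []
    for i, pattern in enumerate(pattern_bank):
        matched = sum(1 for layer, coarse_hit in enumerate(pattern)
                      if coarse_hit in hits_per_layer[layer])
        if matched >= min_layers:
            roads.append(i)
    return roads

# Toy example: 8 layers, two stored patterns, one compatible with the event.
bank = [(1, 2, 2, 3, 3, 4, 4, 5), (7, 7, 6, 6, 5, 5, 4, 4)]
hits = [{1}, {2}, {2}, {3}, {3}, {4}, {4}, {5}]
print(match_patterns(hits, bank))  # [0]
```

The hardware wins over this loop by comparing every stored pattern simultaneously as hits stream in, which is what makes the 100 μs latency budget feasible.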

  6. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of ATLAS experiment at the upcoming LHC accelerator, CERN, in terms of data transmission rates and processing power require a large cluster of computers (of the order of thousands) administrated and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, also to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with CERN IT department's central services. The paper describes a small scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  7. ATLAS Tile Calorimeter: simulation and validation of the response

    Science.gov (United States)

    Faltova, Jana; ATLAS Collaboration

    2015-02-01

    The Tile Calorimeter (TileCal) is the central section of the ATLAS hadronic calorimeter at the Large Hadron Collider. Scintillation light produced in the tiles is read out by wavelength-shifting fibers and transmitted to photomultiplier tubes (PMTs). The resulting electronic signals from approximately 10000 PMTs are measured and digitized before being transferred to off-detector data-acquisition systems. Detailed simulations are described in this contribution, ranging from the implementation of the geometrical elements to a realistic description of the electronics readout pulses, including specific noise treatment and the signal reconstruction. Special attention is given to the improved optical signal propagation and to validation with real particle data.
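    As a hedged illustration of the signal-reconstruction step mentioned above, the sketch below recovers a pulse amplitude from digitized samples by a least-squares fit to a known normalized pulse shape. The shape, pedestal and sample values are invented; the actual TileCal reconstruction uses calibrated pulse shapes and dedicated noise treatment (optimal filtering).

```python
# Illustrative amplitude reconstruction from digitized pulse samples.
# Pulse shape and pedestal are assumptions, not TileCal calibration data.
from typing import Sequence

def amplitude(samples: Sequence[float], shape: Sequence[float],
              pedestal: float) -> float:
    """Least-squares amplitude A minimizing sum((s_i - pedestal - A*g_i)^2)
    for a known normalized pulse shape g."""
    s = [x - pedestal for x in samples]
    num = sum(g * x for g, x in zip(shape, s))
    den = sum(g * g for g in shape)
    return num / den

shape = [0.0, 0.3, 1.0, 0.7, 0.3, 0.1, 0.0]    # normalized shape (assumed)
samples = [50 + 200 * g for g in shape]         # noiseless pulse on pedestal 50
print(amplitude(samples, shape, pedestal=50))   # ~200.0
```

With noise included, the same formula generalizes to noise-weighted (optimal-filtering) coefficients, which is the approach the abstract's "signal reconstruction" refers to.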

  8. ATLAS Tile Calorimeter: simulation and validation of the response

    CERN Document Server

    Faltova, J; The ATLAS collaboration

    2014-01-01

    The Tile Calorimeter (TileCal) is the central section of the ATLAS hadronic calorimeter at the Large Hadron Collider. Scintillation light produced in the tiles is read out by wavelength-shifting fibers and transmitted to photomultiplier tubes (PMTs). The resulting electronic signals from approximately 10000 PMTs are measured and digitized before being transferred to off-detector data-acquisition systems. Detailed simulations are described in this contribution, ranging from the implementation of the geometrical elements to a realistic description of the electronics readout pulses, including specific noise treatment and the signal reconstruction. Special attention is given to the improved optical signal propagation and to validation with real particle data.

  9. The NASA atlas of the solar system

    Science.gov (United States)

    Greeley, Ronald; Batson, Raymond M.

    1997-01-01

    Describes every planet, moon, and body that has been the subject of a NASA mission, including images of 30 solar system objects and maps of 26 objects. The presentation includes geologic history, geologic and reference maps, and shaded relief maps.

  10. The ATLAS Diamond Beam Monitor

    CERN Document Server

    Schaefer, Douglas; The ATLAS collaboration

    2015-01-01

    After the first three years of LHC running, the ATLAS experiment extracted its pixel detector system to refurbish and re-position the optical readout drivers and to install a new barrel layer of pixels. The experiment also took advantage of this access to install a set of beam-monitoring telescopes with pixel sensors, four each in the forward and backward regions. These telescopes were assembled from chemical-vapour-deposited (CVD) diamond sensors to survive in this high-radiation environment without needing extensive cooling. This talk describes the lessons learned in the construction and commissioning of the ATLAS Diamond Beam Monitor (DBM). We show results from the construction quality-assurance tests and commissioning performance, including results from cosmic-ray running in early 2015, as well as expected first results from LHC Run 2 collisions.

  11. Development of an Optical Read-Out System for the LISA/NGO Gravitational Reference Sensor: A Status Report

    Science.gov (United States)

    Di Fiore, L.; De Rosa, R.; Garufi, F.; Grado, A.; Milano, L.; Spagnuolo, V.; Russano, G.

    2013-01-01

    The LISA group in Napoli is working on the development of an Optical Read-Out (ORO) system, based on optical levers and position-sensitive detectors, for the LISA gravitational reference sensor. ORO is not meant as an alternative, but as an addition, to capacitive readout, which is the reference solution for LISA/NGO and will be tested in flight by LISA-Pathfinder. The main goal is the introduction of some redundancy, with consequent mission-risk mitigation. Furthermore, the ORO system is more sensitive than the capacitive one, and its usage would allow a significant relaxation of the specifications on cross-couplings in the drag-free control loops. The reliability of the proposed ORO device and the fulfilment of the sensitivity requirements have already been demonstrated in bench-top measurements and in tests with the four-mass torsion pendulum developed in Trento as a ground-testing facility for LISA-Pathfinder and LISA hardware. In this paper we report on the present status of this activity, presenting the latest results and perspectives on several relevant aspects: 1) system design, measured sensitivity and noise characterization; 2) possible layouts for integration in LISA/NGO and bench-top tests on real-scale prototypes; 3) the search for space-compatible components and preliminary tests. We also discuss next steps in view of a possible application in LISA/NGO.

  12. A low-noise and fast pre-amplifier and readout system for SiPMs

    Energy Technology Data Exchange (ETDEWEB)

    Biroth, M., E-mail: biroth@kph.uni-mainz.de [Institut für Kernphysik, Johannes Gutenberg-Universität, Mainz (Germany); Achenbach, P., E-mail: patrick@kph.uni-mainz.de [Institut für Kernphysik, Johannes Gutenberg-Universität, Mainz (Germany); Downie, E. [Physics Department, George Washington University, Washington, DC (United States); Thomas, A. [Institut für Kernphysik, Johannes Gutenberg-Universität, Mainz (Germany)

    2015-07-01

    To operate silicon photomultipliers (SiPMs) in a demanding environment with large temperature gradients, different amplifier concepts were characterized by analyzing SiPM pulse shapes and charge distributions. A fully differential 4-wire SiPM pre-amplifier with separated tracks for the bias voltage and good common-mode noise suppression was developed and successfully tested. To achieve the highest single-pixel resolution, online after-pulse and pile-up suppression was realized with fast readout electronics based on digital filters.

  13. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  14. Improving Security in the ATLAS PanDA System

    International Nuclear Information System (INIS)

    The security challenges faced by users of the grid are considerably different to those faced in previous environments. The adoption of pilot job systems by LHC experiments has mitigated many of the problems associated with the inhomogeneities found on the grid and has greatly improved job reliability; however, pilot job systems themselves must then address many security issues, including the execution of multiple users' code under a common 'grid' identity. In this paper we describe the improvements and evolution of the security model in the ATLAS PanDA (Production and Distributed Analysis) system. We describe the security in the PanDA server which is in place to ensure that only authorized members of the VO are allowed to submit work into the system and that jobs are properly audited and monitored. We discuss the security in place between the pilot code itself and the PanDA server, ensuring that only properly authenticated workload is delivered to the pilot for execution. When the code to be executed is from a 'normal' ATLAS user, as opposed to the production system or other privileged actor, then the pilot may use an EGEE-developed identity-switching tool called gLExec. This changes the grid proxy available to the job and also switches the UNIX user identity to protect the privileges of the pilot code proxy. We describe the problems in using this system and how they are overcome. Finally, we discuss security drills which have been run using PanDA and show how these improved our operational security procedures.

  15. Level-1 Data Driver Card of the ATLAS New Small Wheel Upgrade Compatible with the Phase II 1 MHz Readout Scheme

    CERN Document Server

    Gkountoumis, Panagiotis; The ATLAS collaboration

    2016-01-01

    The Level-1 Data Driver Card (L1DDC) will be fabricated for the future upgrades of the ATLAS experiment at CERN. Specifically, these upgrades will be performed in the innermost stations of the muon spectrometer end-caps. The L1DDC is a high-speed aggregator board capable of communicating with a large number of front-end electronics boards. It collects the Level-1 data along with monitoring data and transmits them to a network interface through a single bidirectional fiber link. In addition, the L1DDC board distributes trigger, timing and configuration data coming from the network interface to the front-end boards. The L1DDC is fully compatible with the Phase II upgrade, where the trigger rate is 1 MHz. This paper describes the overall scheme of the data-acquisition process, and in particular the L1DDC board for the New Small Wheel upgrade. Finally, the electronics layout on the chamber is also described.

  16. Neutron and proton tests of different technologies for the upgrade of cold readout electronics of the ATLAS Hadronic End-cap Calorimeter

    International Nuclear Information System (INIS)

    The expected increase of the total integrated luminosity by a factor of ten at the HL-LHC compared to the design goals for the LHC essentially eliminates the safety factor for radiation hardness realized in the current cold amplifiers of the ATLAS Hadronic End-cap Calorimeter (HEC). New, more radiation-hard technologies have been studied: SiGe bipolar, Si CMOS FET and GaAs FET transistors have been irradiated with neutrons up to an integrated fluence of 2.2 · 10^16 n/cm^2 and with 200 MeV protons up to an integrated fluence of 2.6 · 10^14 p/cm^2. Comparisons of transistor parameters such as the gain for both types of irradiation are presented.

  17. Robustness analysis of an intensity modulated fiber-optic position sensor with an image sensor readout system.

    Science.gov (United States)

    Jason, Johan; Nilsson, Hans-Erik; Arvidsson, Bertil; Larsson, Anders

    2013-06-01

    An intensity-modulated fiber-optic position sensor, based on a fiber-to-bundle coupling and a readout system using a CMOS image camera together with fast routines for position extraction and calibration, is presented and analyzed. The proposed system eliminates the alignment issues otherwise associated with coupling-based fiber-optic sensors, while keeping the sensing point free from detector electronics. In this study the robustness of the system is characterized through simulations of the system performance, and the outcome is compared with experimental results. It is shown that knowledge of the shape of the coupled power distribution is the single most important factor for high performance of the system. Furthermore, it is experimentally shown that the position extraction error can be reduced to the theoretical limit by employing a modulation-function model well fitted to the real coupled power distribution. PMID:23736347

  18. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D

    2007-03-15

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed with a length scale of 1/2 and an area scale of 1/144 relative to the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat-loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid system and its underlying scaling methodology.

  19. The upgrade of the ATLAS first-level calorimeter trigger

    Science.gov (United States)

    Yamamoto, Shimpei

    2016-07-01

    The first-level calorimeter trigger (L1Calo) operated successfully through the first data-taking phase of the ATLAS experiment at the CERN Large Hadron Collider. For the forthcoming LHC runs, a series of upgrades is planned for L1Calo to face new challenges posed by the upcoming increases in beam energy and luminosity. This paper reviews the ATLAS L1Calo trigger upgrade project, which introduces new architectures for the liquid-argon calorimeter trigger readout and the L1Calo trigger processing system.

  20. Design of a current based readout chip and development of a DEPFET pixel prototype system for the ILC vertex detector

    International Nuclear Information System (INIS)

    The future TeV-scale linear collider ILC (International Linear Collider) offers a large variety of precision measurements complementary to the discovery potential of the LHC (Large Hadron Collider). To fully exploit its physics potential, a vertex detector with unprecedented performance is needed. One proposed technology for the ILC vertex detector is the DEPFET active pixel sensor. The DEPFET sensor offers particle detection with in-pixel amplification by incorporating a field effect transistor into a fully depleted high-ohmic silicon substrate. The device provides an excellent signal-to-noise ratio and a good spatial resolution at the same time. To establish a very fast readout of a DEPFET pixel matrix with row rates of 20 MHz and more, the 128-channel CURO II ASIC has been designed and fabricated. The architecture of the chip is completely based on current-mode (SI) techniques, perfectly adapted to the current signal of the sensor. For the ILC vertex detector a prototype system with a 64 x 128 DEPFET pixel matrix read out by the CURO II chip has been developed. The design issues and the standalone performance of the readout chip as well as first results with the prototype system will be presented. (orig.)

  1. Design of a current based readout chip and development of a DEPFET pixel prototype system for the ILC vertex detector

    Energy Technology Data Exchange (ETDEWEB)

    Trimpl, M.

    2005-12-15

    The future TeV-scale linear collider ILC (International Linear Collider) offers a large variety of precision measurements complementary to the discovery potential of the LHC (Large Hadron Collider). To fully exploit its physics potential, a vertex detector with unprecedented performance is needed. One proposed technology for the ILC vertex detector is the DEPFET active pixel sensor. The DEPFET sensor offers particle detection with in-pixel amplification by incorporating a field effect transistor into a fully depleted high-ohmic silicon substrate. The device provides an excellent signal-to-noise ratio and a good spatial resolution at the same time. To establish a very fast readout of a DEPFET pixel matrix with row rates of 20 MHz and more, the 128-channel CURO II ASIC has been designed and fabricated. The architecture of the chip is completely based on current-mode (SI) techniques, perfectly adapted to the current signal of the sensor. For the ILC vertex detector a prototype system with a 64 x 128 DEPFET pixel matrix read out by the CURO II chip has been developed. The design issues and the standalone performance of the readout chip as well as first results with the prototype system will be presented. (orig.)

  2. The Helium Cryogenic System for the ATLAS Experiment

    CERN Document Server

    Delruelle, N; Passardi, Giorgio; ten Kate, H H J

    2000-01-01

    The magnetic configuration of the ATLAS detector is generated by an inner superconducting solenoid and three air-core toroids (the barrel and two end-caps), each of the toroids made of eight superconducting coils. Two separate helium refrigerators will be used to allow cool-down from ambient temperature and steady-state operation at 4.5 K of all the magnets, which have a total cold mass of about 600 tons. In comparison with the preliminary design, the helium distribution scheme and the interface with the magnet sub-systems are simplified, resulting in a considerable improvement in ease of operation and overall reliability of the system, at some expense of operational flexibility. The paper presents the cryogenic layout and the basic principles for magnet cool-down, steady-state operation and thermal recovery after a fast energy dump.

  3. Design and Performance of the ATLAS Muon Detector Control System

    CERN Document Server

    Polini, A; The ATLAS collaboration

    2011-01-01

    Muon detection plays a key role at the Large Hadron Collider. The ATLAS Muon Spectrometer includes Monitored Drift Tubes (MDT) and Cathode Strip Chambers (CSC) for precision momentum measurement in the toroidal magnetic field. Resistive Plate Chambers (RPC) in the barrel region, and Thin Gap Chambers (TGC) in the end-caps, provide the level-1 trigger and a second coordinate used for tracking in conjunction with the MDT. The Detector Control System of each subdetector technology is required to monitor and safely operate tens of thousand of channels, which are distributed on several subsystems, including low and high voltage power supplies, trigger and front-end electronics, currents and thresholds monitoring, alignment and environmental sensors, gas and electronic infrastructure. The system is also required to provide a level of abstraction for ease of operation as well as specific tools allowing expert actions and detailed analysis of archived data. The hardware architecture and the software solutions adopted...

  4. Silicon photomultiplier readout system for the ECAL in the PEBS and test results from the system

    International Nuclear Information System (INIS)

    Silicon photomultipliers (SiPMs) have remarkable advantages for photo-detection. Compared with PMTs, SiPMs offer high gain, excellent time resolution, insensitivity to magnetic fields and a lower operating voltage. SiPMs from Hamamatsu are used in the electromagnetic calorimeter (ECAL) sub-detector of the Positron Electron Balloon Spectrometer (PEBS) experiment, a balloon-borne spectrometer aiming at a precise measurement of the cosmic-ray positron fraction. This paper introduces the evaluation and test results of several SiPM detector types, the dedicated front-end application-specific integrated circuit (ASIC) electronics and the design of the data acquisition (DAQ) system. (authors)

  5. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    International Nuclear Information System (INIS)

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed-up ad-hoc and post mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deployment. A full set of modules, including a fast polling SNMP engine, user interfaces using latest web technologies and caching mechanisms, has been designed and developed from scratch. Over the last year the system proved to be stable and reliable, replacing the previous performance monitoring system and extending its capabilities. Currently it is operated using a precision interval of 25 seconds (the industry standard is 300 seconds). Although it was developed in order to address the needs for integrated performance monitoring of the ATLAS TDAQ network, the package can be used for monitoring any network with rigid demands of precision and scalability, exceeding normal industry standards.
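    A minimal sketch of the fixed-interval polling described above (25-second precision interval): `poll_device` below is a hypothetical stand-in for an SNMP query, and the loop sleeps out the remainder of each interval so cycles stay aligned regardless of how long the polling itself takes.

```python
# Sketch of a fixed-interval polling loop; poll_device is a placeholder
# for an SNMP GET against a switch or router, invented for illustration.
import time
from typing import Callable, List, Sequence

POLL_INTERVAL = 25.0  # seconds, the precision interval quoted in the abstract

def poll_device(device: str) -> dict:
    """Placeholder for an SNMP query returning interface counters."""
    return {"device": device, "octets_in": 0, "octets_out": 0}

def run_poller(devices: Sequence[str], cycles: int = 1,
               sleep: Callable[[float], None] = time.sleep,
               clock: Callable[[], float] = time.monotonic) -> List[list]:
    """Poll every device each cycle, then sleep out the remainder of the
    interval so cycle starts stay aligned to POLL_INTERVAL."""
    results = []
    for _ in range(cycles):
        start = clock()
        results.append([poll_device(d) for d in devices])
        elapsed = clock() - start
        if elapsed < POLL_INTERVAL:
            sleep(POLL_INTERVAL - elapsed)
    return results

data = run_poller(["edge-switch-01", "router-02"], cycles=1, sleep=lambda s: None)
print(len(data[0]))  # 2
```

The production system additionally parallelizes the SNMP queries and caches results for the web front-end; this sketch only shows the interval-keeping skeleton.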

  6. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    Science.gov (United States)

    Octavian Savu, Dan; Al-Shabibi, Ali; Martin, Brian; Sjoen, Rune; Batraneanu, Silvia Maria; Stancu, Stefan

    2011-12-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed-up ad-hoc and post mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deployment. A full set of modules, including a fast polling SNMP engine, user interfaces using latest web technologies and caching mechanisms, has been designed and developed from scratch. Over the last year the system proved to be stable and reliable, replacing the previous performance monitoring system and extending its capabilities. Currently it is operated using a precision interval of 25 seconds (the industry standard is 300 seconds). Although it was developed in order to address the needs for integrated performance monitoring of the ATLAS TDAQ network, the package can be used for monitoring any network with rigid demands of precision and scalability, exceeding normal industry standards.

  7. The Compact NASA Atlas of the Solar System

    Science.gov (United States)

    Greeley, Ronald; Batson, Raymond

    2002-01-01

    Without sacrificing any of the detail or breadth of the full-size edition, the essential reference source for maps of every planet, moon, or small body investigated by NASA missions is now available in a convenient, portable format. Featuring over 150 maps, 214 color illustrations and a gazetteer that lists the names of all features officially approved by the International Astronomical Union, The Compact NASA Atlas of the Solar System includes the full range of information gathered from NASA missions throughout the Solar System. Compiled by the US Geological Survey, this atlas includes: -Geological maps -Reference maps -Shaded relief maps -Synthetic aperture radar mosaics -Color photo-mosaics that present the features of planets and their satellites This 'road map' of the solar system is the definitive guide for planetary science and should be part of every cartographer's and astronomer's collection. Ronald Greeley is a Regent Professor in the Department of Geological Sciences at Arizona State University. He is a team member of the Galileo mission to Jupiter and of the Mars Pathfinder lander. Greeley is currently a co-investigator for the European Mars Express mission. Raymond Batson spent his 35-year career with the United States Geological Survey. He has worked in terrestrial mapping and in lunar and planetary mapping. Batson served as co-investigator or team member on most NASA planetary missions, including the Apollo lunar lander missions, the Mariner Mars and Venus/Mercury mapping missions, the Viking 1 and 2 Mars mapping missions, the Voyager missions to the outer planets, and the Magellan Venus radar mapping mission.

  8. Performance of Frequency Division Multiplexing Readout System for AC-Biased Transition-Edge Sensor X-ray Microcalorimeters

    Science.gov (United States)

    Yamamoto, R.; Sakai, K.; Takei, Y.; Yamasaki, N. Y.; Mitsuda, K.

    2014-08-01

    Frequency division multiplexing (FDM) is a promising approach to reading out large-format transition-edge sensor (TES) arrays for future astrophysical missions. We constructed a four-channel FDM readout system using baseband feedback in the MHz band. We demonstrated the principle of our FDM method with an actual TES array, a multiplexing SQUID and LC band-pass filters below 100 mK. The resonant frequencies of the LC filters were consistent with the design values to better than 3%. We successfully obtained X-ray pulses from two TESs simultaneously, but the energy resolution was degraded to about 100 eV at 5.9 keV and crosstalk effects were observed. The origin of the crosstalk effects was investigated using modified setups. Based on comparative experiments and numerical calculations, we conclude that the non-linearity of the SQUID is the cause of some of the crosstalk effects. Unlike the regular crosstalk from adjoining channels, the crosstalk due to non-linearity observed here occurs in all channels. Solving these problems will help us to obtain FDM readout with sufficient energy resolution.
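    The agreement between designed and measured LC resonant frequencies follows directly from f = 1/(2π√(LC)). A minimal sketch, using illustrative component values (not the actual hardware values of this readout):

```python
import math


def lc_resonance_hz(l_henry: float, c_farad: float) -> float:
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC band-pass filter."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))


# Illustrative values only: 2 uH and 1 nF place the carrier in the
# MHz band used by this kind of FDM readout.
f_measured = lc_resonance_hz(2e-6, 1e-9)
f_design = 3.56e6
deviation = abs(f_measured - f_design) / f_design
print(f"f = {f_measured / 1e6:.2f} MHz, deviation = {deviation:.2%}")
assert deviation < 0.03  # the paper's quoted agreement: better than 3%
```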

  9. OPC Unified Architecture within the Control System of the ATLAS Experiment

    CERN Document Server

    Nikiel, P P; Franz, S; Schlenker, S; Boterenbrood, H; Filimonov, V

    2014-01-01

    The Detector Control System (DCS) of the ATLAS experiment at the LHC has been using the OPC DA standard as an interface for controlling various standard and custom hardware components and their integration into the SCADA layer.

  10. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the ATLAS TDAQ SysAdmin group activities which deals with administration of the TDAQ computing environment supporting Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  11. Network Resiliency Implementation in the ATLAS TDAQ System

    CERN Document Server

    Stancu, S N; The ATLAS collaboration; Batraneanu, S M; Ballestrero, S; Caramarcu, C; Martin, B; Savu, D O; Sjoen, R V; Valsan, L

    2010-01-01

    The ATLAS TDAQ (Trigger and Data Acquisition) system performs the real-time selection of events produced by the detector. For this purpose approximately 2000 computers are deployed and interconnected through various high-speed networks, whose architecture has already been described. This article focuses on the implementation and validation of network connectivity resiliency (previously presented at a conceptual level). Redundancy and, where applicable, load balancing are achieved through the synergy of several protocols: 802.3ad link aggregation, OSPF (Open Shortest Path First), VRRP (Virtual Router Redundancy Protocol) and MST (Multiple Spanning Trees). An innovative method for cost-effective redundant connectivity of high-throughput, high-availability servers is presented. Furthermore, real-life examples are presented showing how redundancy works and, more importantly, how it might fail despite careful planning.

  12. Integrated System for Performance Monitoring of the ATLAS TDAQ Network

    CERN Document Server

    Savu, DO; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, SM; Stancu, S

    2011-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed-up ad-hoc and post mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...

  13. Integrated System for Performance Monitoring of ATLAS TDAQ Network

    CERN Document Server

    Savu, D; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, S; Stancu, S

    2010-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed-up ad-hoc and post mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...

  14. Development of a test system for the analysis of the read-out electronic cabling for the CMS drift tube chambers

    International Nuclear Information System (INIS)

    A test system has been developed for the analysis of the read-out electronics cabling for the CMS drift tube chambers. The read-out electronics will be placed inside aluminium boxes, so-called Minicrates, which are going to be produced soon at CIEMAT. Due to the difficulty of detecting and repairing cabling errors once the cables have been installed, and given the large number of Minicrates to be produced, it was decided to design and develop a test system for checking the cabling before installation. (Author)

  15. The performance of the bolometer array and readout system during the 2012/2013 flight of the E and B experiment (EBEX)

    CERN Document Server

    MacDermid, Kevin; Ade, Peter; Aubin, Francois; Baccigalupi, Carlo; Bandura, Kevin; Bao, Chaoyun; Borrill, Julian; Chapman, Daniel; Didier, Joy; Dobbs, Matt; Grain, Julien; Grainger, Will; Hanany, Shaul; Helson, Kyle; Hillbrand, Seth; Hilton, Gene; Hubmayr, Hannes; Irwin, Kent; Johnson, Bradley; Jaffe, Andrew; Jones, Terry; Kisner, Ted; Klein, Jeff; Korotkov, Andrei; Lee, Adrian; Levinson, Lorne; Limon, Michele; Miller, Amber; Milligan, Michael; Pascale, Enzo; Raach, Kate; Reichborn-Kjennerud, Britt; Reintsema, Carl; Sagiv, Ilan; Smecher, Graeme; Stompor, Radek; Tristram, Matthieu; Tucker, Greg; Westbrook, Ben; Zilic, Kyle

    2014-01-01

    EBEX is a balloon-borne telescope designed to measure the polarization of the cosmic microwave background radiation. During its eleven day science flight in the Austral Summer of 2012, it operated 955 spider-web transition edge sensor (TES) bolometers separated into bands at 150, 250 and 410 GHz. This is the first time that an array of TES bolometers has been used on a balloon platform to conduct science observations. Polarization sensitivity was provided by a wire grid and continuously rotating half-wave plate. The balloon implementation of the bolometer array and readout electronics presented unique development requirements. Here we present an outline of the readout system, the remote tuning of the bolometers and Superconducting QUantum Interference Device (SQUID) amplifiers, and preliminary current noise of the bolometer array and readout system.

  16. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations, and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, consolidating up to 1 PB of data worldwide each day and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  17. Performance of a proximity cryogenic system for the ATLAS central solenoid magnet

    CERN Document Server

    Doi, Y; Makida, Y; Kondo, Y; Kawai, M; Aoki, K; Haruyama, T; Kondo, T; Mizumaki, S; Wachi, Y; Mine, S; Haug, F; Delruelle, N; Passardi, Giorgio; ten Kate, H H J

    2002-01-01

    The ATLAS central solenoid magnet has been designed and constructed as a collaborative work between KEK and CERN for the ATLAS experiment in the LHC project. The solenoid provides an axial magnetic field of 2 Tesla at the center of the tracking volume of the ATLAS detector. The solenoid is installed in a common cryostat with a liquid-argon calorimeter in order to minimize the mass of the cryostat wall. The coil is cooled indirectly using two-phase helium flow in a pair of serpentine cooling lines. The cryogen is supplied by the ATLAS cryogenic plant, which also supplies helium to the Toroid magnet systems. The proximity cryogenic system for the solenoid has two major components: a control dewar and a valve unit. In addition, a programmable logic controller (PLC) was prepared for the automatic operation and the solenoid test in Japan. This paper describes the design of the proximity cryogenic system and results of the performance test. (7 refs).

  18. ATLAS EventIndex monitoring system using Kibana analytics and visualization platform

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Prokoshin, Fedor; Gallas, Elizabeth; Favareto, Andrea; Hrivnac, Julius; Sanchez, Javier; Fernandez Casani, Alvaro; Gonzalez de la Hoz, Santiago; Garcia Montoro, Carlos; Salt, Jose; Malon, David; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.

  19. Development of a picosecond time-of-flight system in the ATLAS experiment

    International Nuclear Information System (INIS)

    In this thesis, we present a study of the sensitivity to Beyond Standard Model physics brought by the design and installation of picosecond time-of-flight detectors in the forward region of the ATLAS experiment at the LHC. The first part of the thesis presents a study of the sensitivity to the quartic gauge anomalous coupling between the photon and the W boson, using exclusive WW pair production in ATLAS. The event selection is built considering the semi-leptonic decay of the WW pair and the presence of the AFP detector in ATLAS. The second part gives a description of the design of large-area picosecond photo-detectors and of time-reconstruction algorithms, with special care given to signal sampling and processing for precision timing. The third part presents the design of SamPic, a custom picosecond readout integrated circuit. Finally, its first results are reported, in particular a world-class 5 ps timing precision in measuring the delay between two fast pulses. (author)

  20. Development of a picosecond time-of-flight system in the ATLAS experiment

    CERN Document Server

    Grabas, Hervé

    In this thesis, we present a study of the sensitivity to Beyond Standard Model physics brought by the design and installation of picosecond time-of-flight detectors in the forward region of the ATLAS experiment at the LHC. The first part of the thesis presents a study of the sensitivity to the quartic gauge anomalous coupling between the photon and the W boson, using exclusive WW pair production in ATLAS. The event selection is built considering the semi-leptonic decay of the WW pair and the presence of the AFP detector in ATLAS. The second part gives a description of the design of large-area picosecond photo-detectors and of time-reconstruction algorithms, with special care given to signal sampling and processing for precision timing. The third part presents the design of SamPic, a custom picosecond readout integrated circuit. Finally, its first results are reported, in particular a world-class 5 ps timing precision in measuring the delay between two fast pulses.

  1. Readiness of the ATLAS Liquid Argon Calorimeter for LHC Collisions

    CERN Document Server

    Aad, G; Abdallah, J; Abdelalim, A A; Abdesselam, A; Abdinov, O; Abi, B; Abolins, M; Abramowicz, H; Abreu, H; Acharya, B S; Adams, D L; Addy, T N; Adelman, J; Adorisio, C; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Aharrouche, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahmed, H; Ahsan, M; Aielli, G; Akdogan, T; Åkesson, T P A; Akimoto, G; Akimov, A V; Aktas, A; Alam, M S; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Aliyev, M; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amorim, A; Amorós, G; Amram, N; Anastopoulos, C; Andeen, T; Anders, C F; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angerami, A; Anghinolfi, F; Anjos, N; Antonaki, A; Antonelli, M; Antonelli, S; Antunovic, B; Anulli, F; Aoun, S; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Archambault, J P; Arfaoui, S; Arguin, J-F; Argyropoulos, T; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnault, C; Artamonov, A; Arutinov, D; Asai, M; Asai, S; Asfandiyarov, R; Ask, S; Åsman, B; Asner, D; Asquith, L; Assamagan, K; Astbury, A; Astvatsatourov, A; Atoian, G; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Austin, N; Avolio, G; Avramidou, R; Axen, D; Ay, C; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A; Bachacou, H; Bachas, K; Backes, M; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, M D; Baltasar Dos Santos Pedrosa, F; Banas, E; Banerjee, P; Banerjee, S; Banfi, D; Bangert, A; Bansal, V; Baranov, S P; Baranov, S; Barashkou, A; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baron, S; Baroncelli, A; Barr, A J; Barreiro, F; BarreiroGuimarães da Costa, J; Barrillon, P; Barros, N; Bartoldus, R; Bartsch, D; Bastos, J; Bates, R L; 
Bathe, S; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Bazalova, M; Beare, B; Beau, T; Beauchemin, P H; Beccherle, R; Becerici, N; Bechtle, P; Beck, G A; Beck, H P; Beckingham, M; Becks, K H; Bedajanek, I; Beddall, A J; Beddall, A; Bednár, P; Bednyakov, V A; Bee, C; Begel, M; Behar Harpaz, S; Behera, P K; Beimforde, M; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellina, F; Bellomo, M; Belloni, A; Belotskiy, K; Beltramello, O; Ben Ami, S; Benary, O; Benchekroun, D; Bendel, M; Benedict, B H; Benekos, N; Benhammou, Y; Benincasa, G P; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Beretta, M; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernardet, K; Bernat, P; Bernhard, R; Bernius, C; Berry, T; Bertin, A; Besson, N; Bethke, S; Bianchi, R M; Bianco, M; Biebel, O; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bitenc, U; Black, K M; Blair, R E; Blanchard, J-B; Blanchot, G; Blocker, C; Blocki, J; Blondel, A; Blum, W; Blumenschein, U; Bobbink, G J; Bocci, A; Boehler, M; Boek, J; Boelaert, N; Böser, S; Bogaerts, J A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A; Bondarenko, V G; Bondioli, M; Boonekamp, M; Booth, J R A; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borroni, S; Bos, K; Boscherini, D; Bosman, M; Bosteels, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boulahouache, C; Bourdarios, C; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Braem, A; Branchini, P; Brandenburg, G W; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brelier, B; Bremer, J; Brenner, R; Bressler, S; Breton, D; Brett, N D; Britton, D; Brochu, F M; Brock, I; Brock, R; Brodbeck, T J; Brodet, E; Broggi, F; Bromberg, C; Brooijmans, G; Brooks, W K; Brown, G; Brubaker, E; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; 
Bruni, A; Bruni, G; Bruschi, M; Buanes, T; Bucci, F; Buchanan, J; Buchholz, P; Buckley, A G; Budagov, I A; Budick, B; Büscher, V; Bugge, L; Bulekov, O; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Bussey, P; Buszello, C P; Butin, F; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Byatt, T; Caballero, J; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Camarri, P; Cambiaghi, M; Cameron, D; Campabadal-Segura, F; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Capasso, L; Capeans-Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Caracinha, D; Caramarcu, C; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, B; Caron, S; Carrillo Montoya, G D; Carron Montero, S; Carter, A A; Carter, J R

    2010-01-01

    The ATLAS liquid argon calorimeter has been operating continuously since August 2006. At that time, only part of the calorimeter was read out, but since the beginning of 2008 all calorimeter cells have been connected to the ATLAS readout system in preparation for LHC collisions. This paper gives an overview of the liquid argon calorimeter performance measured in situ with random triggers, calibration data, cosmic muons, and LHC beam splash events. Results on the detector operation, timing performance, electronics noise, and gain stability are presented. High energy deposits from radiative cosmic muons and beam splash events make it possible to check the intrinsic constant term of the energy resolution. The uniformity of the electromagnetic barrel calorimeter response along eta (averaged over phi) is measured at the percent level using minimum-ionizing cosmic muons. Finally, studies of electromagnetic showers from radiative muons have been used to cross-check the Monte Carlo simulation. The performance results obtained u...

  2. ATLAS magnet common cryogenic, vacuum, electrical and control systems

    CERN Document Server

    Miele, P; Delruelle, N; Geich-Gimbel, C; Haug, F; Olesen, G; Pengo, R; Sbrissa, E; Tyrvainen, H; ten Kate, H H J

    2004-01-01

    The superconducting Magnet System for the ATLAS detector at the LHC at CERN comprises a Barrel Toroid, two End Cap Toroids and a Central Solenoid, with overall dimensions of 20 m in diameter by 26 m in length and a stored energy of 1.6 GJ. Common proximity cryogenic and electrical systems for the toroids are implemented. The Cryogenic System provides the cooling power for the 3 toroid magnets, considered as a single cold mass (600 tons), and for the CS. The 21 kA toroid and the 8 kA solenoid electrical circuits each comprise a switch-mode power supply, two circuit breakers, water-cooled bus bars, He-cooled current leads and a diode-resistor ramp-down unit. The Vacuum System consists of a group of primary rotary pumps and sets of high-vacuum diffusion pumps connected to each individual cryostat. The Magnet Safety System guarantees the magnet protection and human safety through slow and fast dump treatment. The Magnet Control System ensures control, regulation and monitoring of the operation of the magnets. The update...

  3. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons and will give the detector the same inclination as the LHC accelerator.

  4. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  5. The Straw Cooling System in the ATLAS TRT

    CERN Document Server

    Godlewski, J

    2002-01-01

    This technical note deals with the straw cooling system for the TRT End-caps in the ATLAS detector. The combination of a high gas flow requirement and small gas volumes yields unfavourable properties in terms of control stability. Early experiments on a prototype of the final cooling system showed that pressure losses in the gas distribution lines must be decreased to fulfil the pressure control requirements. One part of this note is devoted to a CFD analysis of a critical component, an elbow duct, in the gas distribution line. To enable analyses of the overall cooling system dynamics, generic simulation components were created and applied in a simulation of the prototype cooling system. The simulation was verified by an equivalent experiment on the prototype cooling system. The manifolds that distribute and collect the gas in the group-of-wheels are dealt with in the last chapter, where results from a fluid mechanical model implemented in Matlab are compared to values obtained by experiment.

  6. A non-destructive readout circuit of the linear array image sensor with over 90dB dynamic range and 190k fps for radar system

    Science.gov (United States)

    Yang, Cong-jie; Gao, Zhi-yuan; Zeng, Xin-ji; Yao, Su-ying; Gao, Jing

    2015-04-01

    This paper presents a non-destructive readout circuit for a linear-array image sensor with wide dynamic range and high-speed readout for radar systems. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is adjusted automatically by switching different capacitors onto the integration node asynchronously according to the output voltage. A class-AB OPA is utilized to drive all the additional capacitors to achieve high-speed readout. The photo-response curve is a polyline with five segments, which enables a 101.7 dB dynamic range. In addition, the exposure time is 5.12 µs in simulation, so a frame rate of over 190k fps is achieved.
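    The quoted figures can be cross-checked with two one-line formulas: dynamic range in dB, and the frame-rate ceiling set by the exposure time. A sketch; only the 101.7 dB and 5.12 µs values come from the abstract, the rest is arithmetic.

```python
import math


def dynamic_range_db(v_max: float, v_noise: float) -> float:
    """Dynamic range in dB: 20*log10(max signal / noise floor)."""
    return 20.0 * math.log10(v_max / v_noise)


def max_frame_rate_fps(exposure_s: float) -> float:
    """Frame-rate ceiling when readout is limited by the exposure time."""
    return 1.0 / exposure_s


# The quoted 101.7 dB corresponds to a max-signal/noise ratio of ~1.2e5.
ratio = 10.0 ** (101.7 / 20.0)

# A 5.12 us exposure bounds the frame rate at ~195k fps ("over 190k fps").
fps = max_frame_rate_fps(5.12e-6)
print(f"ratio ~ {ratio:.3g}, frame rate ~ {fps / 1e3:.0f}k fps")
```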

  7. Experimental investigation of silicon photomultipliers as compact light readout systems for gamma-ray spectroscopy applications in fusion plasmas

    International Nuclear Information System (INIS)

    A matrix of Silicon Photo Multipliers has been developed for light readout from a large-area 1 in. × 1 in. LaBr3 crystal. The system has been characterized in the laboratory and its performance compared to that of a conventional photomultiplier tube. A pulse duration of 100 ns was achieved, which opens up spectroscopy applications at high counting rates. The energy resolution measured using radioactive sources extrapolates to 3%–4% in the energy range Eγ = 3–5 MeV, enabling gamma-ray spectroscopy measurements at good energy resolution. The results reported here are of relevance in view of the development of compact gamma-ray detectors with spectroscopy capabilities, such as an enhanced gamma-ray camera for high-power fusion plasmas, where the use of photomultiplier tubes is impeded by space limitations and sensitivity to magnetic fields

  8. The Evolution of the Trigger and Data Acquisition System in the ATLAS Experiment

    CERN Document Server

    Garelli, N; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of such upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates.

  9. System Description of the Electrical Power Supply System for the ATLAS Integral Test Loop

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S. K.; Park, J. K.; Kim, Y. S.; Song, C. H.; Baek, W. P

    2007-02-15

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400. This report describes the design and technical specifications of the electrical power supply system, which supplies electrical power to the core heater rods, other heaters, various pumps and other systems. The electrical power supply system received final operational approval from the Korea Electrical Safety Corporation. During performance tests of its operation and control, the electrical power supply system showed fully acceptable performance.

  10. The ATLAS Event Builder

    CERN Document Server

    Vandelli, W; Battaglia, A; Beck, H P; Blair, R; Bogaerts, A; Bosman, M; Ciobotaru, M; Cranfield, R; Crone, G; Dawson, J; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Drake, G; Ermoline, Y; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Gorini, B; Green, B; Haberichter, W; Haberli, C; Hauser, R; Hinkelbein, C; Hughes-Jones, R; Joos, M; Kieft, G; Klous, S; Korcyl, K; Kordas, K; Kugel, A; Leahu, L; Lehmann, G; Martin, B; Mapelli, L; Meessen, C; Meirosu, C; Misiejuk, A; Mornacchi, G; Müller, M; Nagasaka, Y; Negri, A; Pasqualucci, E; Pauly, T; Petersen, J; Pope, B; Schlereth, J L; Spiwoks, R; Stancu, S; Strong, J; Sushkov, S; Szymocha, T; Tremblet, L; Ünel, G; Vermeulen, J; Werner, P; Wheeler-Ellis, S; Wickens, F; Wiedenmann, W; Yu, M; Yasu, Y; Zhang, J; Zobernig, H; 2007 IEEE Nuclear Science Symposium and Medical Imaging Conference

    2008-01-01

    Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which, at its first two trigger levels (LVL1+LVL2), reduces the initial bunch crossing rate of 40 MHz to ~3 kHz. At this rate, the Event Builder collects the data from the readout system PCs (ROSs) and provides fully assembled events to the Event Filter (EF). The EF is the third trigger level and its aim is to achieve a further rate reduction to ~200 Hz on the permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs, and substantial fractions of the Event Builder and Event Filter PCs, have been installed and commissioned. We report on performance tests on this initial system, which is capable of going beyond the required data rates and bandwidths for Event Building for the ATLAS experiment.
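    The rate reduction described above implies specific rejection factors and an event-building bandwidth. A back-of-the-envelope sketch; the rates come from the abstract, while the ~1.5 MB event size is an assumption not stated there.

```python
# Rates quoted in the abstract.
bunch_crossing_hz = 40e6   # LHC bunch crossing rate seen by LVL1
eb_input_hz = 3e3          # ~3 kHz after LVL1+LVL2
ef_output_hz = 200.0       # ~200 Hz written to permanent storage
event_size_mb = 1.5        # assumed event size, not from the abstract

rejection_lvl12 = bunch_crossing_hz / eb_input_hz
rejection_ef = eb_input_hz / ef_output_hz
eb_throughput_gb_s = eb_input_hz * event_size_mb / 1e3

print(f"LVL1+LVL2 rejection ~ {rejection_lvl12:,.0f}:1")
print(f"EF rejection        ~ {rejection_ef:.0f}:1")
print(f"Event-building throughput ~ {eb_throughput_gb_s:.1f} GB/s")
```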

  11. SIGNAL RECONSTRUCTION PERFORMANCE OF THE ATLAS HADRONIC TILE CALORIMETER

    CERN Document Server

    Do Amaral Coutinho, Y; The ATLAS collaboration

    2013-01-01

    "The Tile Calorimeter for the ATLAS experiment at the CERN Large Hadron Collider (LHC) is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The TileCal front-end electronics reads out the signals produced by about 10000 channels, measuring energies ranging from ~30 MeV to ~2 TeV. The read-out system is responsible for reconstructing the data in real time, fulfilling the tight time constraint imposed by the ATLAS first-level trigger rate (100 kHz). The main component of the read-out system is the Digital Signal Processor (DSP), which uses an Optimal Filtering reconstruction algorithm to compute, for each channel, the signal amplitude, time and quality factor at the required high rate. Currently the ATLAS detector and the LHC are undergoing an upgrade program tha...
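    The Optimal Filtering idea mentioned above reduces to linear algebra: the amplitude A and the amplitude-weighted time A·t are linear combinations of the pedestal-subtracted samples with precomputed weights. A sketch under that assumption follows; the weights below are illustrative placeholders, not the real TileCal coefficients.

```python
# Sketch of Optimal Filtering: amplitude and time from a fixed number of
# digitized samples, using precomputed weight vectors a and b.
def optimal_filter(samples, pedestal, a, b):
    s = [x - pedestal for x in samples]                 # pedestal subtraction
    amplitude = sum(ai * si for ai, si in zip(a, s))    # A  = sum a_i * s_i
    atime = sum(bi * si for bi, si in zip(b, s))        # At = sum b_i * s_i
    time = atime / amplitude if amplitude != 0 else 0.0
    return amplitude, time

# toy pulse: peak in the central sample, symmetric neighbours
samples = [50.0, 60.0, 150.0, 60.0, 50.0]
a = [0.0, 0.25, 0.5, 0.25, 0.0]   # illustrative amplitude weights
b = [0.0, -0.5, 0.0, 0.5, 0.0]    # illustrative timing weights
amp, t = optimal_filter(samples, pedestal=50.0, a=a, b=b)
print(amp, t)  # 55.0 0.0 (symmetric pulse: zero time offset)
```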

  12. A segmented scintillator-lead photon calorimeter using a double wavelength shifter optical readout system

    International Nuclear Information System (INIS)

    The construction and performance of a prototype scintillator-lead photon calorimeter using a double wavelength-shifter optical readout is described. The calorimeter is divided into four individual cells, each consisting of 44 layers of 3 mm lead plus 1 cm thick scintillator. The edges of each scintillator plate are covered by acrylic bars doped with a wavelength-shifting material. The light produced in each scintillator plate is first converted in these bars, then converted a second time in a set of acrylic rods which run longitudinally through the calorimeter along the corners of each calorimeter cell. A photomultiplier is attached to each of these rods at the back end of the calorimeter. The energy resolution obtained with incident particles in the energy range 2-30 GeV is sigma/E = 0.12/√E. The uniformity of response across the front face of each cell was measured. Showers within each cell can be localised with an accuracy of better than sigma = 7 mm. (orig.)
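    A quick numeric reading of the quoted stochastic resolution term, sigma/E = 0.12/√E with E in GeV, at the two ends of the tested range:

```python
# Relative energy resolution sigma/E = 0.12/sqrt(E), E in GeV,
# evaluated at the endpoints of the 2-30 GeV test range.
import math

def relative_resolution(e_gev):
    return 0.12 / math.sqrt(e_gev)

print(round(relative_resolution(2.0), 3))   # 0.085 -> ~8.5% at 2 GeV
print(round(relative_resolution(30.0), 3))  # 0.022 -> ~2.2% at 30 GeV
```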

  13. Self-triggering readout system for the neutron lifetime experiment PENeLOPE

    Science.gov (United States)

    Gaisbauer, D.; Bai, Y.; Konorov, I.; Paul, S.; Steffen, D.

    2016-02-01

    PENeLOPE is a neutron lifetime experiment developed at the Technische Universität München and located at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II), aiming at a precision of 0.1 seconds. The detector for PENeLOPE consists of about 1250 avalanche photodiodes (APDs) with a total active area of 1225 cm2. The decay proton detector and its electronics, including the shaper, preamplifier, ADC and FPGA cards, will be operated at a high electrostatic potential of -30 kV and in a magnetic field of 0.6 T. In addition, the APDs will be cooled to 77 K. The 1250 APDs are divided into 14 groups of 96 channels, including spares. A 12-bit ADC digitizes the detector signals at 1 MSps. A firmware was developed for the detector featuring a self-triggering readout with continuous pedestal calculation and configurable signal detection. Data transmission and configuration are done via the Switched Enabling Protocol (SEP), a time-division-multiplexing low-layer protocol which provides deterministic latency for time-critical messages, IPBus, and JTAG interfaces. The network has an n:1 topology, reducing the number of optical links.
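    The self-triggering scheme with continuous pedestal calculation can be sketched as follows: the pedestal tracks the baseline via an exponential moving average updated only outside triggers, and a sample fires when it exceeds the pedestal by a configurable threshold. All parameters here are illustrative, not the PENeLOPE firmware values.

```python
# Sketch of a self-triggering channel: rolling pedestal estimate plus a
# configurable threshold. The pedestal is frozen while a sample is over
# threshold so the trigger pulse does not bias the baseline.
def self_trigger(stream, threshold, alpha=0.05, pedestal=0.0):
    hits = []
    for i, sample in enumerate(stream):
        if sample - pedestal > threshold:
            hits.append((i, sample - pedestal))      # record index and amplitude
        else:
            pedestal += alpha * (sample - pedestal)  # track slow baseline drift
    return hits

stream = [100, 101, 99, 100, 180, 100, 99]
hits = self_trigger(stream, threshold=20, pedestal=100.0)
print(hits)  # one hit at index 4, amplitude ~80 above pedestal
```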

  14. Role Based Access Control system in the ATLAS experiment

    International Nuclear Information System (INIS)

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing need to restrict access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The RBAC implementation uses a directory service based on the Lightweight Directory Access Protocol to store the users (∼3000), roles (∼320), groups (∼80) and access policies. The information is kept in sync with various other databases and directory services: human resources, central CERN IT, CERN Active Directory and the Access Control Database used by DCS. The paper concludes with a detailed description of the integration across all areas of the system.

  15. Role Based Access Control system in the ATLAS experiment

    Science.gov (United States)

    Valsan, M. L.; Dobson, M.; Lehmann Miotto, G.; Scannicchio, D. A.; Schlenker, S.; Filimonov, V.; Khomoutnikov, V.; Dumitru, I.; Zaytsev, A. S.; Korol, A. A.; Bogdantchikov, A.; Avolio, G.; Caramarcu, C.; Ballestrero, S.; Darlea, G. L.; Twomey, M.; Bujor, F.

    2011-12-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever-growing need to restrict access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The RBAC implementation uses a directory service based on the Lightweight Directory Access Protocol to store the users (~3000), roles (~320), groups (~80) and access policies. The information is kept in sync with various other databases and directory services: human resources, central CERN IT, CERN Active Directory and the Access Control Database used by DCS. The paper concludes with a detailed description of the integration across all areas of the system.
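    The RBAC model described above boils down to two mappings: users hold roles, and roles carry access policies over resources. A minimal in-memory sketch of that check follows; the entries are invented examples, and the real system stores them in an LDAP directory rather than Python dictionaries.

```python
# Minimal role-based access control check: a user may perform an action on a
# resource if any of the user's roles grants that (resource, action) pair.
USER_ROLES = {"alice": {"shifter", "daq-expert"}, "bob": {"shifter"}}
ROLE_POLICIES = {
    "shifter": {("daq", "read")},
    "daq-expert": {("daq", "read"), ("daq", "write")},
}

def is_allowed(user, resource, action):
    return any((resource, action) in ROLE_POLICIES.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "daq", "write"))  # True
print(is_allowed("bob", "daq", "write"))    # False
```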

  16. The Associative Memory System Infrastructure of the ATLAS Fast Tracker

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525014; The ATLAS collaboration

    2016-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment's silicon tracker. The AM is the heart of FTK and is based on ASICs (AM chips) designed specifically to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that serve as seeds for full-resolution track fitting. The AM system is implemented as a collection of boards named “Serial Link Processor” (AMBSLP), since it is based on a network of 900 2 Gb/s serial links to sustain the huge data traffic. The AMBSLP has a high power consumption (~250 W), and the AM system needs custom power and cooling. This presentation reports on the integration of the AMBSLP inside FTK, the infrastructure needed to run and cool a system which foresees many AMBSLPs in the same crate, and the performance of the produced prototypes tested in the global FTK integration, an important milestone to be satisfie...
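    The pattern matching the AM chips perform can be illustrated in software: coarse-resolution hit patterns (one "superstrip" per silicon layer) are compared against a stored pattern bank, and matching roads seed the full-resolution fit. The bank below is invented for illustration, and real AM chips compare every stored pattern simultaneously in hardware.

```python
# Toy associative-memory pattern matching: a road matches when its superstrip
# is among the hit superstrips on every layer.
PATTERN_BANK = {
    "road-1": (3, 7, 12, 18),
    "road-2": (3, 8, 12, 19),
    "road-3": (5, 9, 14, 20),
}

def match_patterns(superstrips_per_layer):
    """Return the roads whose superstrip is hit on every layer."""
    return [name for name, road in PATTERN_BANK.items()
            if all(ss in hits for ss, hits in zip(road, superstrips_per_layer))]

hits = [{3, 5}, {7, 9}, {12}, {18, 20}]
print(match_patterns(hits))  # ['road-1']
```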

  17. Portable Gathering System for Monitoring and Online Calibration at ATLAS

    CERN Document Server

    Conde-Muíño, P; Dos Anjos, A; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; De Santo, A; Díaz-Gómez, M; Dosil, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luminari, L; Maeno, T; Masik, J; Di Mattia, A; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Qian, Z; Resconi, S; Rosati, S; Sánchez, C; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S; Sutton, M; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Computing In High Energy Physics

    2005-01-01

    During the runtime of any experiment, a central monitoring system that detects problems as soon as they appear plays an essential role. In a large experiment like ATLAS, the online data acquisition system is distributed across the nodes of large farms, each of them running several processes that analyse a fraction of the events. In this architecture, it is necessary to have a central process that collects all the monitoring data from the different nodes, produces full-statistics histograms and analyses them. In this paper we present the design of such a system, called the gatherer. It collects any monitoring object, such as histograms, from the farm nodes, from any process in the DAQ, trigger and reconstruction chain. It also adds up the statistics, if required, and runs user-defined algorithms to analyse the monitoring data. The results are sent to a centralized display, which shows the information online, and to the archiving system, triggering alarms in case of problems. The innovation...
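    The central operation of such a gatherer, summing per-node histograms of the same quantity into one full-statistics histogram, can be sketched in a few lines. Bin labels and node names are invented for the example.

```python
# Sketch of the gatherer's summation step: each farm node publishes a partial
# histogram (bin -> count); the central process adds them bin by bin.
from collections import Counter

def gather(histograms):
    """Sum per-node histograms into one full-statistics histogram."""
    total = Counter()
    for h in histograms:
        total.update(h)   # Counter.update adds counts rather than replacing
    return dict(total)

node_a = {"0-10": 5, "10-20": 2}
node_b = {"0-10": 3, "20-30": 1}
print(gather([node_a, node_b]))  # {'0-10': 8, '10-20': 2, '20-30': 1}
```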

  18. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  19. Software framework developed for the slice test of the ATLAS endcap muon trigger system

    CERN Document Server

    Komatsu, S; Ishida, Y; Tanaka, K; Hasuko, K; Kano, H; Matsumoto, Y; Yakamura, Y; Sakamoto, H; Ikeno, M; Nakayoshi, K; Sasaki, O; Yasu, Y; Hasegawa, Y; Totsuka, M; Tsuji, S; Maeno, T; Ichimiya, R; Kurashige, H

    2002-01-01

    A sliced system test of the ATLAS endcap muon level-1 trigger system was carried out in 2001 and again in 2002. For the 2001 slice test we developed our own software framework for property configuration and run control. The system is written in C++ throughout, and the multi-PC control system is accomplished using CORBA. We then restructured the software on top of the ATLAS online software framework and used it for the slice test in 2002. In this report we discuss the two systems in detail, with emphasis on module property configuration and run control. (8 refs).

  20. BATS, the readout control of UA1

    Energy Technology Data Exchange (ETDEWEB)

    Botlo, M.; Dorenbosch, J.; Jimack, M.; Szoncso, F.; Taurok, A.; Walzel, G. (European Organization for Nuclear Research, Geneva (Switzerland))

    1991-04-15

    A steadily rising luminosity and different readout architectures for the various detector systems of UA1 required a new data-flow control to minimize dead time. BATS, a finite state machine conceived around two microprocessors in a single VME crate, improved flexibility and reliability. Compatibility with BATS streamlined all readout branches. BATS also proved to be a valuable asset in spotting readout problems and previously undetected data-flow bottlenecks. (orig.).
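    A finite state machine for readout control can be sketched as a transition table: the controller accepts only the transitions listed, which is how such systems keep illegal operator commands and data-flow races in check. States and events here are illustrative, not the actual BATS design.

```python
# Sketch of a readout-control finite state machine: legal transitions live in
# a table keyed by (current state, event); anything else is rejected.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "configured",
}

class ReadoutFSM:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

fsm = ReadoutFSM()
print(fsm.handle("configure"))  # configured
print(fsm.handle("start"))      # running
```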