WorldWideScience

Sample records for atlas daq system

  1. The Message Reporting System of the ATLAS DAQ System

    CERN Document Server

    Caprini, M; Kolos, S; 10th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications

    2008-01-01

    The Message Reporting System (MRS) in the ATLAS data acquisition system (DAQ) is one package of the Online Software, which acts as the glue between the various elements of the DAQ, High Level Trigger (HLT) and Detector Control System (DCS). The aim of the MRS is to provide a facility which allows all software components in ATLAS to report messages to other components of the distributed DAQ system. The processes requiring an MRS are, on the one hand, applications that report error conditions or information and, on the other hand, message processors that receive the reported messages. A message reporting application can inject one or more messages into the MRS at any time. An application wishing to receive messages can subscribe to a message group according to defined criteria. The application receives messages that fulfil the subscription criteria when they are reported to the MRS. The receiving message processing can consist of anything from simply logging the messages in a file/terminal to performing message analysis. The inter-process comm...
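
    The report/subscribe pattern described in this abstract is easy to picture with a minimal sketch. The following self-contained C++ model is illustrative only; the class and method names (MessageBus, report, subscribe) are invented here and are not the actual MRS API.

        // Illustrative sketch of the publish/subscribe pattern described above.
        // None of these names belong to the real MRS API.
        #include <functional>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        struct Message {
            std::string application;  // reporting component
            std::string severity;     // e.g. "INFO", "ERROR"
            std::string text;
        };

        class MessageBus {
        public:
            using Criteria = std::function<bool(const Message&)>;
            using Handler  = std::function<void(const Message&)>;

            // A receiver subscribes with a filter; only matching messages are delivered.
            void subscribe(Criteria criteria, Handler handler) {
                subscribers_.push_back({std::move(criteria), std::move(handler)});
            }

            // A reporting application injects a message at any time.
            void report(const Message& msg) {
                for (auto& s : subscribers_)
                    if (s.first(msg)) s.second(msg);
            }

        private:
            std::vector<std::pair<Criteria, Handler>> subscribers_;
        };

        int main() {
            MessageBus mrs;
            // Subscribe to all ERROR messages and log them to the terminal.
            mrs.subscribe([](const Message& m) { return m.severity == "ERROR"; },
                          [](const Message& m) {
                              std::cout << m.application << ": " << m.text << '\n';
                          });
            mrs.report({"ROS-7", "ERROR", "link timeout"});  // delivered
            mrs.report({"ROS-7", "INFO", "run started"});    // filtered out
        }

    The subscription predicate plays the role of the abstract's "subscription criteria": a receiver sees only the messages that satisfy its filter.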

  2. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools used by the subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of environmental parameters (air temperature, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and on additional components developed by the CERN Joint Controls Project Framework. The prototype implementation and measurements are presented.

  3. Application of the ATLAS DAQ and Monitoring System for MDT and RPC Commissioning

    CERN Document Server

    Pasqualucci, E

    2007-01-01

    The ATLAS DAQ and monitoring software are currently in common use to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS Readout application. Based on a plug-in mechanism, it provides a complete environment to interface any kind of detector or trigger electronics to the ATLAS DAQ system. All the possible flavours of this application are used to test and run the MDT and RPC detectors at the pre-commissioning and commissioning sites. Ad hoc plug-ins have been developed to implement data readout via VME, both with ROD prototypes and by emulating final electronics to read out data with temporary solutions, and to provide trigger distribution and busy management in a multi-crate environment. Data-driven event building functionality is also used to combine data f...

  4. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    CERN Document Server

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, the configuration databases and the message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  5. Communication between Trigger/DAQ and DCS in ATLAS

    International Nuclear Information System (INIS)

    Burckhart, H.; Jones, R.; Hart, R.; Khomoutnikov, V.; Ryabov, Y.

    2001-01-01

    Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless there is a need to communicate. The initial problem definition and analysis suggested three subsystems that the Trigger/DAQ DCS Communication (DDC) project should support, with the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems such as data flow, level 1 and high-level triggers. The DDC uses the various Online components as an interface point on the Trigger/DAQ side and the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online software process. The DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test-beam during summer 2001.
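
    The three DDC subsystems enumerated above map naturally onto three narrow interfaces. The C++ sketch below follows that reading; all type and method names are invented for illustration and do not come from the DDC code itself.

        // Hypothetical interfaces mirroring the three DDC subsystems named above.
        #include <iostream>
        #include <string>

        // 1. data exchange: named data points flow between Trigger/DAQ and DCS
        struct DataExchange {
            virtual void publishToDcs(const std::string& name, double value) = 0;
            virtual ~DataExchange() = default;
        };

        // 2. alarms: one way, from DCS to Trigger/DAQ
        struct AlarmReceiver {
            virtual void onDcsAlarm(const std::string& source, const std::string& text) = 0;
            virtual ~AlarmReceiver() = default;
        };

        // 3. commands: the other way, from Trigger/DAQ to DCS
        struct CommandSender {
            virtual bool issueCommand(const std::string& target, const std::string& cmd) = 0;
            virtual ~CommandSender() = default;
        };

        // Trivial demo implementation that just prints what would be transferred.
        struct ConsoleDdc : DataExchange, AlarmReceiver, CommandSender {
            void publishToDcs(const std::string& n, double v) override {
                std::cout << "data " << n << " = " << v << '\n';
            }
            void onDcsAlarm(const std::string& s, const std::string& t) override {
                std::cout << "alarm from " << s << ": " << t << '\n';
            }
            bool issueCommand(const std::string& tgt, const std::string& c) override {
                std::cout << "command " << c << " -> " << tgt << '\n';
                return true;
            }
        };

        int main() {
            ConsoleDdc ddc;
            ddc.publishToDcs("run_number", 1234);        // subsystem 1
            ddc.onDcsAlarm("cooling", "temperature high"); // subsystem 2
            ddc.issueCommand("HV_crate_3", "ramp_down");   // subsystem 3
        }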

  6. Applications of CORBA in the ATLAS prototype DAQ

    CERN Document Server

    Jones, R; Mapelli, Livio P; Ryabov, Yu

    2000-01-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package, called Inter-Language Unification (ILU), has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, which incorporates this facility and hides the details of the naming schema, is described. The development procedure and environment for remote database access using IPC is described. Various end-use...

  7. Applications of CORBA in the ATLAS prototype DAQ

    Science.gov (United States)

    Jones, R.; Kolos, S.; Mapelli, L.; Ryabov, Y.

    2000-04-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package, called Inter-Language Unification (ILU), has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, which incorporates this facility and hides the details of the naming schema, is described. The development procedure and environment for remote database access using IPC is described. Various end-user interfaces have been implemented using the Java language that communicate with C++ servers via CORBA/ILU. To support such interfaces, a second implementation of IPC in Java has been developed. The design and implementation of such connections are described. An alternative CORBA implementation, ORBacus, has been evaluated and compared with ILU.
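
    The name-space partitioning mentioned in both versions of this abstract can be modelled without an ORB. The sketch below uses a plain std::map keyed by "<division>/<component>" names; a real client would instead walk CosNaming::NamingContext objects, and the IOR strings here are placeholders.

        // Self-contained model of a Naming-Service name space partitioned along
        // DAQ divisions. Illustrative only; entries and divisions are hypothetical.
        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            // Hierarchical names: "<division>/<component>" -> object reference.
            std::map<std::string, std::string> nameSpace = {
                {"runcontrol/root_controller", "IOR:...rc"},
                {"dataflow/event_builder",     "IOR:...eb"},
                {"mrs/server",                 "IOR:...mrs"},
            };

            // Resolving a name means looking it up within its division's partition.
            const std::string name = "dataflow/event_builder";
            auto it = nameSpace.find(name);
            if (it != nameSpace.end())
                std::cout << name << " -> " << it->second << '\n';
            else
                std::cout << name << " not bound\n";
        }

    Partitioning by division keeps each subsystem's bindings in its own branch, so clients only need to know the convention, not the full layout of the name space.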

  8. A rule-based verification and control framework in ATLAS Trigger-DAQ

    CERN Document Server

    Kazarov, A; Lehmann-Miotto, G; Sloper, J E; Ryabov, Yu; Computing In High Energy and Nuclear Physics

    2007-01-01

    In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system is composed of O(1000) applications running on more than 2600 computers in a network. At such a system size, software and hardware failures are quite frequent. To minimize system downtime, the Trigger-DAQ control system shall include advanced verification and diagnostics facilities. The operator should use tests and the expertise of the TDAQ and detector developers in order to diagnose and recover from errors, automatically where possible. The TDAQ control system is built as a distributed tree of controllers, where the behavior of each controller is defined in a rule-based language allowing easy customization. The control system also includes a verification framework which allows users to develop and configure tests for any component in the system with different levels of complexity. It can be used as a stand-alone test facility for a small detector installation, as part of the general TDAQ initialization procedure, and for diagnosing the problems ...
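
    A toy version of the rule-based recovery idea: each rule pairs a condition on an application's observed state with a corrective action. The rule shapes and names below are invented for illustration; the real TDAQ rule language is far richer.

        // Minimal sketch of rule-based error recovery in the spirit described above.
        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        struct AppState {
            std::string name;
            bool running;
            bool responsive;
        };

        struct Rule {
            std::function<bool(const AppState&)> condition;  // when does it apply?
            std::function<void(AppState&)> action;           // what do we do?
        };

        int main() {
            std::vector<Rule> rules = {
                // If an application died, restart it automatically.
                {[](const AppState& a) { return !a.running; },
                 [](AppState& a) { std::cout << "restart " << a.name << '\n'; a.running = true; }},
                // If it is alive but hangs, kill and restart it.
                {[](const AppState& a) { return a.running && !a.responsive; },
                 [](AppState& a) { std::cout << "kill+restart " << a.name << '\n'; a.responsive = true; }},
            };

            AppState app{"ros-42", false, true};
            for (auto& r : rules)
                if (r.condition(app)) r.action(app);  // matching rules fire
        }

    In the real system each controller in the distributed tree evaluates its own rule set, so recovery happens close to the failing component instead of at a central point.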

  9. High performance message passing for the ATLAS DAQ/EF-1 project

    CERN Document Server

    Mornacchi, Giuseppe

    1999-01-01

    Summary form only. A message passing library has been developed in the context of the ATLAS DAQ/EF-1 project. It is used for time-critical applications within the front-end part of the DAQ system, mainly to exchange data control messages between I/O processors. Key objectives of the design were low message overheads, efficient use of the data transfer buses, provision of broadcast functionality and a hardware- and operating-system-independent implementation of the application interface. The design and implementation of the message passing library are presented. As required by the project, the implementation is based on commercial components, namely VMEbus, PCI, the LynxOS real-time operating system and an additional inter-processor link, PVIC. The latter offers broadcast functionality identified as being important to the overall performance of the message passing. In addition, performance benchmarks for all the implementing buses are presented, for both simple test programs and the full DAQ applications. (0 refs)...

  10. Trigger and DAQ in the Combined Test Beam

    CERN Multimedia

    Dobson, M; Padilla, C

    2004-01-01

    Introduction During the Combined Test Beam the latest prototype of the ATLAS Trigger and DAQ system is being used to support the data taking of all the detectors. Further development of the TDAQ subsystems benefits from the direct experience given by the integration in the beam test. Support of detectors for the Combined Test Beam All ATLAS detectors need their own detector-specific DAQ development. The readout electronics is controlled by a Readout Driver (ROD), custom-built for each detector. The ROD receives data for events that are accepted by the first level trigger. The detector-specific part of the DAQ system needs to control the ROD and to respond to commands of the central DAQ (e.g. to "Start" a run). The ROD module then sends event data to a Readout System (ROS), a PC with special receiver modules/buffers. At this point the data enters the realm of the ATLAS DAQ and High Level Trigger system, constructed from Linux PCs connected with gigabit Ethernet networks. Most ATLAS detectors, representing s...

  11. A DAQ system for CAMAC controller CC/NET using DAQ-Middleware

    International Nuclear Information System (INIS)

    Inoue, E; Yasu, Y; Nakayoshi, K; Sendai, H

    2010-01-01

    DAQ-Middleware is a framework for DAQ systems which is based on RT-Middleware (Robot Technology Middleware) and dedicated to building DAQ systems. In recent years DAQ-Middleware has come into use as one of the DAQ frameworks for the next generation of particle physics experiments at KEK. DAQ-Middleware comprises DAQ-Components providing all the necessary basic functions of a DAQ and is easily extensible, so using DAQ-Middleware you can easily construct your own DAQ system by combining these components. As an example, we have developed a DAQ system for the CC/NET [1] using DAQ-Middleware by adding a GUI part and a CAMAC readout part. The CC/NET CAMAC controller was developed to accomplish high-speed readout of CAMAC data. The basic design concept of CC/NET is to realize data taking through networks, so it is consistent with the DAQ-Middleware concept. We show how convenient it is to use DAQ-Middleware.
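
    The component layout that records 11 and (later) the DAQ-Middleware development paper describe, Gatherer feeding a Dispatcher that fans out to Logger and Monitor, can be sketched as a simple pipeline. In real DAQ-Middleware these are separate processes wired together by XML configuration; this single-process C++ model only illustrates the data-path topology.

        // Toy fan-out pipeline mimicking Gatherer -> Dispatcher -> Logger/Monitor.
        #include <functional>
        #include <iostream>
        #include <vector>

        using Event = std::vector<unsigned char>;
        using Sink  = std::function<void(const Event&)>;

        struct Dispatcher {
            std::vector<Sink> outputs;                 // e.g. Logger and Monitor
            void consume(const Event& e) { for (auto& out : outputs) out(e); }
        };

        int main() {
            Dispatcher dispatcher;
            dispatcher.outputs.push_back([](const Event& e) {   // Logger role
                std::cout << "log " << e.size() << " bytes\n";
            });
            dispatcher.outputs.push_back([](const Event& e) {   // Monitor role
                std::cout << "analyse " << e.size() << " bytes\n";
            });

            Event fromGatherer{0xCA, 0xFE};            // Gatherer read out some data
            dispatcher.consume(fromGatherer);          // data path fans out downstream
        }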

  12. Editor for Remote Database used in ATLAS Trigger/DAQ

    CERN Document Server

    Meessen, C; Valenta, J

    2006-01-01

    The poster gives a brief summary of the ATLAS T/DAQ system, then introduces the RDB database and describes the RDB Editor application, including its internal structure, GUI features, etc. The RDB Editor is an easy-to-use Java application which allows simple navigation among the huge number of objects stored in the RDB. It supports bookmarks, histories, etc., in the way usual in web browsers. Moreover, it is possible to enhance the application with specialized (graphical) viewers for objects of a particular class, which allow the user to see, for example, details that are hard to spot in a textual view. As an example of such a plug-in, a viewer for the EFD_Configuration class was developed.

  13. The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ

    CERN Document Server

    Stancu, Stefan; Dobinson, Bob; Korcyl, Krzysztof; Knezo, Emil; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The article analyzes a proposed network topology for the ATLAS DAQ DataFlow and identifies the Ethernet features required for proper operation of the network: MAC address table size, switch performance in terms of throughput and latency, and the use of Flow Control, Virtual LANs and Quality of Service. We investigate these features on several Ethernet switches and draw conclusions on their usefulness for the ATLAS DataFlow network.

  14. The 2002 Test Beam DAQ

    CERN Multimedia

    Mapelli, L.

    The ATLAS Tilecal group was the first user of the Test Beam version of the DAQ/EF-1 prototype in 2000. The prototype was successfully tested in the lab in summer 1999 and was officially adopted as the baseline solution for the Test Beam DAQ at the end of 1999. It provides the right solution for users who need a modern data acquisition chain for final or almost-final front-end and off-detector electronics (RODs and ROD emulators). The typical architecture for the readout and the DAQ is sketched in the figure below. A number of detector crates can send data over the Read Out Link to the Read Out System. The Read Out System sends data over an Ethernet link to a SubFarm PC that forwards the data to Central Data Recording. In 2001 the Muon MDT group also adopted this modern DAQ; there, for the first time, a PC-based ReadOut System was used, instead of the VME-based implementation used in 2000 (and for the Tilecal DAQ in 2001). In 2002 Tilecal also adopted the PC-based implement...

  15. Using Linux PCs in DAQ applications

    CERN Document Server

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joos, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    The ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF-1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of read-out modules in front-end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus-based PC (the VMIVME-7587) and on a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus and the status of the LDAQ implementation, and draw some conclusions on the use of Linux in DAQ applica...

  16. An Introduction to ATLAS Pixel Detector DAQ and Calibration Software Based on a Year's Work at CERN for the Upgrade from 8 to 13 TeV

    CERN Document Server

    AUTHOR|(CDS)2094561

    An overview is presented of the ATLAS pixel detector Data Acquisition (DAQ) system obtained by the author during a year-long opportunity to work on calibration software for the 2015-16 Layer‑2 upgrade. It is hoped the document will function more generally as an easy entry point for future work on ATLAS pixel detector calibration systems. To begin with, the overall place of ATLAS pixel DAQ within the CERN Large Hadron Collider (LHC), the purpose of the Layer-2 upgrade and the fundamentals of pixel calibration are outlined. This is followed by a brief look at the high level structure and key features of the calibration software. The paper concludes by discussing some difficulties encountered in the upgrade project and how these led to unforeseen alternative enhancements, such as development of calibration “simulation” software allowing the soundness of the ongoing upgrade work to be verified while not all of the actual readout hardware was available for the most comprehensive testing.

  17. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

     The DAQ system (see Figure 2) consists of: the full detector read-out of a total of 633 FEDs (Front-End Drivers), where the FRL (Front-end Readout Link) provides the common interface between the sub-detector-specific FEDs and the central DAQ; 8 DAQ slices with a 100 GB/s event building capacity, corresponding to a nominal 2 kB per FRL at a Level-1 (L1) trigger rate of 100 kHz; an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; and a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). (Figure 2: The CMS DAQ system.) The DAQ system for the 2010 physics runs: The DAQ system has been deployed for pp and heavy-ion physics data-taking. It can be easily ...
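
    As a rough consistency check of the figures quoted above (taking the ~500 FRL count given in the CMS DAQ installation report later in this listing), the nominal per-FRL fragment size and L1 rate do reproduce the quoted event-building capacity:

        $500\;\mathrm{FRLs} \times 2\,\mathrm{kB/FRL} \times 100\,\mathrm{kHz} = 100\,\mathrm{GB/s}$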

  18. Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector

    CERN Document Server

    AUTHOR|(CDS)2091916; Hsu, Shih-Chieh; Hauck, Scott Alan

    The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs followed by periods of inactivity known as Long Shutdowns (LS). During these LS phases both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC these upgrades improve its ability to create data for physicists; the more data the LHC can create, the more opportunities there are for rare events of interest to appear. The experiments upgrade so that they can record the data and ensure such events won't be missed. Currently the LHC is in Run 2, having completed the first LS of three. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented, starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of t...

  19. DAQ

    CERN Multimedia

    F. Meijers.

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation: The DAQ system has been successfully deployed to capture the first LHC collisions, with trigger rates typically in the range 1-11 kHz. The DAQ system also serviced global cosmics and commissioning data taking; here data were typically taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. Operational procedures for DAQ shifters and on-call experts have been consolidated. Throughout 2009, the online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. A development and integration database has been ...

  20. DAQ

    CERN Multimedia

    J.A. Coarasa Perez

    Event Builder: One of the key design features of CMS is the large Central Data Acquisition System, capable of bringing over 100 GB of data to the surface and building 100,000 events every second. This very large DAQ system is expected to give CMS a competitive advantage, since we can have a very flexible High Level Trigger running entirely on standard computer processors. The first stage of what will be the largest DAQ system in the world is now being commissioned at Point 5. While the detector has been read out until now by a small system called the mini-DAQ, the large central DAQ Event Builder has been put together and debugged over the last 4 months. During the month of September, the full system from FED (front-end connection to the detector readout) to Filter Unit is being commissioned, and we hope to use the central DAQ Event Builder for the Global Run at the end of September. The first batch of 400 computers arrived in mid-April. These computers became Readout Units (RUs), wit...

  1. The operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The first part of this presentation will give an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It will describe how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking, and in some cases to push performance beyond the original design specification. The experience accumulated in the ATLAS DAQ/HLT system operation during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the se...

  2. Overview and performance of the FNAL KTeV DAQ system

    International Nuclear Information System (INIS)

    Nakaya, T.; O'Dell, V.; Hazumi, M.; Yamanaka, T.

    1995-11-01

    KTeV is a new fixed target experiment at Fermilab designed to study CP violation in the neutral kaon system. The KTeV Data Acquisition System (DAQ) is one of the highest-performance DAQs in the field of high energy physics. The sustained data throughput of the KTeV DAQ reaches 160 Mbytes/s, and the available online level-3 processing power is 3600 MIPS. In order to handle such a high data throughput, the KTeV DAQ is designed around a memory matrix core where the data flow is divided and parallelized. In this paper, we present the architecture and test results of the KTeV DAQ system.

  3. Physics Requirements for the ALICE DAQ system

    CERN Document Server

    Vande Vyvre, P

    2000-01-01

    The goal of this note is to review the requirements for the DAQ system originating from the various physics topics that will be studied by the ALICE experiment. It summarises all the current requirements, both for Pb-Pb and p-p interactions. The consequences in terms of throughput at different stages of the DAQ system are presented for different running scenarios.

  4. Belle DAQ system upgrade at 2001

    CERN Document Server

    Suzuki, S Y; Kim, H W; Kim, H J; Kim, H O; Nakao, M; Won, E; Yamauchi, M

    2002-01-01

    We renewed the data acquisition system for the Belle experiment. The previous data acquisition system, which had been used since December 1998, did not have a level-2 trigger facility. To improve the data reduction factor and the total throughput, we replaced the event builder, the online computer farm and the storage system. The event builder and online computer farm are unified into one system. This event building farm uses commodity hardware and newly added level-2 trigger functionality. The new data acquisition system has been in operation since last autumn and is very stable. We have taken 36 fb⁻¹ with the new DAQ system, already overtaking the 30 fb⁻¹ total of the previous DAQ system.

  5. The Data Acquisition and Calibration System for the ATLAS Semiconductor Tracker

    CERN Document Server

    Abdesselam, A; Barr, A J; Bell, P; Bernabeu, J; Butterworth, J M; Carter, J R; Carter, A A; Charles, E; Clark, A; Colijn, A P; Costa, M J; Dalmau, J M; Demirkoz, B; Dervan, P J; Donega, M; D'Onifrio, M; Escobar, C; Fasching, D; Ferguson, D P S; Ferrari, P; Ferrère, D; Fuster, J; Gallop, B; García, C; González, S; González-Sevilla, S; Goodrick, M J; Gorisek, A; Greenall, A; Grillo, A A; Hessey, N P; Hill, J C; Jackson, J N; Jared, R C; Johannson, P D C; de Jong, P; Joseph, J; Lacasta, C; Lane, J B; Lester, C G; Limper, M; Lindsay, S W; McKay, R L; Magrath, C A; Mangin-Brinet, M; Martí i García, S; Mellado, B; Meyer, W T; Mikulec, B; Minano, M; Mitsou, V A; Moorhead, G; Morrissey, M; Paganis, E; Palmer, M J; Parker, M A; Pernegger, H; Phillips, A; Phillips, P W; Postranecky, M; Robichaud-Véronneau, A; Robinson, D; Roe, S; Sandaker, H; Sciacca, F; Sfyrla, A; Stanecka, E; Stapnes, S; Stradling, A; Tyndel, M; Tricoli, A; Vickey, T; Vossebeld, J H; Warren, M R M; Weidberg, A R; Wells, P S; Wu, S L

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests.

  6. The data acquisition and calibration system for the ATLAS Semiconductor Tracker

    International Nuclear Information System (INIS)

    Abdesselam, A; Barr, A J; Demirkoez, B; Barber, T; Carter, J R; Bell, P; Bernabeu, J; Costa, M J; Escobar, C; Butterworth, J M; Carter, A A; Dalmau, J M; Charles, E; Fasching, D; Ferguson, D P S; Clark, A; Donega, M; D'Onifrio, M; Colijn, A-P; Dervan, P J

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests

  7. Development of DAQ-Middleware

    International Nuclear Information System (INIS)

    Yasu, Y; Nakayoshi, K; Sendai, H; Inoue, E; Tanaka, M; Suzuki, S; Satoh, S; Muto, S; Otomo, T; Nakatani, T; Uchida, T; Ando, N; Kotoku, T; Hirano, S

    2010-01-01

    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, which is an international standard of the Object Management Group (OMG) in robotics and whose implementation was developed by AIST. A DAQ-Component is the software unit of DAQ-Middleware. Basic components have already been developed: for example, Gatherer is a readout component, Logger is a data logging component, Monitor is an analysis component, and Dispatcher is connected to Gatherer as the input of the data path and to Logger/Monitor as the output. The DAQ operator is a special component which controls these components using the control/status path. The control/status path and data path, as well as the XML-based system configuration and the XML/HTTP-based system interface, are well defined in the DAQ-Middleware framework. DAQ-Middleware was adopted by experiments at J-PARC, and the commissioning with first beam was successfully carried out. The functionality of DAQ-Middleware and its status at J-PARC are presented.

  8. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    The DAQ system (see Figure 2) consists of: the full detector read-out of a total of 633 FEDs (front-end drivers), where the FRL (front-end readout link) provides the common interface between the sub-detector-specific FEDs and the central DAQ; 8 DAQ slices with a 100 GB/s event building capacity, corresponding to a nominal 2 kB per FRL at a Level-1 trigger rate of 100 kHz; an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; and a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). (Figure 2: The CMS DAQ system.) The two-stage event builder assembles event fragments from typically eight front-ends located underground (USC) into one super-...

  9. FELIX - the new detector readout system for the ATLAS experiment

    CERN Document Server

    AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Vermeulen, Jos; Wu, Weihao; Zhang, Jinlong

    2016-01-01

    From the ATLAS Phase-I upgrade onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fibre transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. By replacing traditional point-to-point links between front-end components and the DAQ system with a switched network, FELIX provides scaling, flexibility, uniformity and upgradability. Different front-end data types or different data sources can be routed to different network endpoints that handle that data type or source: e.g. event data, configuration, calibration, detector control, monito...

  10. SPHERE DAQ and off-line systems: implementation based on the qdpb system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2003-01-01

    The design of the on-line data acquisition (DAQ) system for the SPHERE setup (LHE, JINR) is described. The SPHERE DAQ is based on the qdpb (Data Processing with Branchpoints) system and on configurable representations of the experimental data and CAMAC hardware. The implementation of the DAQ and off-line program code, which depends on the SPHERE setup's hardware layout and experimental data contents, is explained, as well as the software modules specific to this implementation.

  11. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Frans Meijers

    The installation of the 50 kHz DAQ/HLT system was completed during 2008. The equipment consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the High Level Trigger (HLT) comprising 720 8-core PCs, and a 16-node storage manager system allowing a write throughput up to 2 GByte/s and a total capacity of 300 TByte. The 50 kHz DAQ system has been commissioned and has been put into service for global cosmics and commissioning data taking. During CRAFT, data were taken with the full detector at a ~600 Hz cosmic trigger rate. Often an additional 20 kHz of random triggers were mixed in, pre-scaled for storage. The random rate has been increased to ~90 kHz for the commissioning and cosmics runs in 2009, which included all detectors except the tracker. The DAQ system is used, in addition to global data taking, for further commissioning and testing of the central DAQ. To this end data emulators are used at the front-end of the central DAQ (in...

  12. DAQ

    CERN Multimedia

    F. Meijers and C. Schwick

    2010-01-01

    The DAQ system has been deployed for physics data taking as well as supporting global test and commissioning activities. In addition to 24/7 operations, activities addressing performance and functional improvements are ongoing. The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing up to a 2 GByte/s writing rate and a total capacity of 250 TBytes. Operation: The LHC delivered the highest luminosity in fills with 6-8 colliding bunches and reached peak luminosities of 1-2 × 10^29 cm^-2 s^-1. The DAQ was typically operating in those conditions with a ~15 kHz trigger rate, a raw event size of ~500 kByte, and a ~150 Hz recording of stream-A with a size of ~50 kB. The CPU load on the HLT was ~10%. Tests for Heavy-Ion operation: Tests have been carried out to examine the situation for data-taking in the future Heavy-Ion (HI) run. The high occupancy expected in HI run...

  13. ATLAS Detector Interface Group

    CERN Multimedia

    Mapelli, L

    Originally organised as a sub-system in the DAQ/EF-1 Prototype Project, the Detector Interface Group (DIG) was an information exchange channel between the Detector systems and the Data Acquisition, providing critical detector information for prototype design and detector integration. After the reorganisation of the Trigger/DAQ Project and of Technical Coordination, the need to provide an adequate context for the integration of detectors with the Trigger and DAQ led to the organisation of the DIG as one of the activities of Technical Coordination. Such an organisation emphasises the ATLAS-wide coordination of the Trigger and DAQ exploitation aspects, which go beyond the domain of the Trigger/DAQ project itself. As part of Technical Coordination, the DIG provides the natural environment for the common work of Trigger/DAQ and detector experts: a DIG forum for a wide discussion of all the detector and Trigger/DAQ integration issues, and a more restricted DIG group for the practical organisation and implementation o...

  14. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system; feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. A brief overview of the online system structure, its components, and the large scale integration tests and their results is presented.

  15. The Double Chooz DAQ systems

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large-area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language, was developed for all the sub-detectors in the neutrino detector, together with a generic object-oriented data acquisition system for the Outer Veto detector. Generic object-oriented programming was also used to support the readout of several electronic systems, providing a simple interface for any new electronics to be added given a dedicated driver. The core electronics of the experiment is based on FADC electronics (500 MHz sampling rate), and therefore a data-reduction scheme has been implemented to reduce the data volume per trigger. A dynamic data format was created to allow dynamic reduction of each trigger before the data are written to disk. The decision is based on low-level information that determines the relevance of each trigger. The DAQ is structured internally into two types of processors: several read-out processors readi...
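
    To make the per-trigger "dynamic reduction" decision concrete, here is a small hedged C++ sketch: low-level trigger information selects how much of the FADC waveform is retained. The categories and thresholds below are invented for illustration and are not Double Chooz's actual criteria.

        // Hypothetical per-trigger reduction decision, in the spirit described above.
        #include <iostream>

        enum class Reduction { Full, ZeroSuppressed, SummaryOnly };

        // Classify a trigger from low-level quantities available before writing to disk.
        Reduction classify(double totalCharge, bool vetoHit) {
            if (vetoHit) return Reduction::SummaryOnly;        // e.g. likely muon: keep little
            if (totalCharge > 100.0) return Reduction::Full;   // candidate event: keep everything
            return Reduction::ZeroSuppressed;                  // default: drop empty samples
        }

        int main() {
            std::cout << static_cast<int>(classify(250.0, false)) << '\n';  // Full
            std::cout << static_cast<int>(classify(5.0, true)) << '\n';     // SummaryOnly
        }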

  16. DAQ

    CERN Multimedia

    J. Hegeman

    2013-01-01

    The File-based Filter Farm in the CMS DAQ MarkII: The CMS DAQ system will be upgraded after LS1 in order to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The experiment parameters for the post-LS1 data taking remain similar to those of Run 1: a Level-1 aggregate rate of 100 kHz and an aggregate HLT output bandwidth of up to 2 GB/s. A moderate event-size increase is anticipated from increased pile-up and changes in the detector readout. For the output bandwidth, the figure of 2 GB/s is assumed. The original Filter Farm design was successfully operated in 2010-2013 and its efficiency and fault tolerance were brought to an excellent level. There are, however, a number of disadvantages in that design at the interface between the DAQ data flow and the High-Level Trigger that warrant careful scrutiny in view of the deployment of DAQ2 after LS1: the reduction of the number of RU bui...

  17. A prototype DAQ system for the ALICE experiment based on SCI

    International Nuclear Information System (INIS)

    Skaali, B.; Ingebrigtsen, L.; Wormald, D.; Polovnikov, S.; Roehrig, H.

    1998-01-01

    A prototype DAQ system for the ALICE/PHOS beam test and commissioning program is presented. The system has been taking data since August 1997 and represents one of the first applications of the Scalable Coherent Interface (SCI) as the interconnect technology for an operational DAQ system. The front-end VMEbus address space is mapped directly into the DAQ computer memory space through SCI via PCI-SCI bridges. The DAQ computer is a commodity PC running the Linux operating system. The results of measurements of data transfer rate and latency for the PCI-SCI bridges in a PC-VMEbus SCI configuration are presented. An optical SCI link based on the Motorola Optobus I data link is described.

  18. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation: Returning after the Christmas stop, the DAQ system serviced global cosmics and commissioning data taking. Typically data were taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. The online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. Infrastructure: Immediately after the Christmas break, the online data center was put into maximum heat production mode to stress the cooling infrastructure. The maximum heat load produced in the room was about 570 kW. It appeared that the current settings ...

  19. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    Operation for the 2011 physics run For the 2011 run, the HLT farm has been extended with additional PCs comprising 288 system boards with two 6-core CPUs each. This brought the total HLT capacity from 5760 cores to 9216 cores and 18 TB of memory. It provides a capacity for HLT of about 100 ms/event (on a 2.7 GHz E5430 core) at 100 kHz L1 rate in pp collisions. All central DAQ nodes have been migrated to SLC5/64-bit kernel and 64-bit applications. The DAQ system has been deployed for pp physics data-taking in 2011 and performed with high efficiency (downtime for central DAQ was less than 1%). For pp physics data-taking, the DAQ was operating with a L1 trigger rate up to ~100 kHz and, typically, a raw event size of ~500 kB, and ~400 Hz recording of stream-A (which includes all physics triggers) with a size of ~250 kB after compression. The event size increases linearly with the pile-up, as expected. The CPU load on the HLT reached close to 100%, depending on L1 and HLT menus. By changing the L1 and HLT pre-...

  20. Gated integrator PXI-DAQ system for Thomson scattering diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Kiran, E-mail: kkpatel@ipr.res.in; Pillai, Vishal; Singh, Neha; Thomas, Jinto; Kumar, Ajai

    2017-06-15

    A Gated Integrator (GI) PXI-based data acquisition (DAQ) system has been designed and developed for the ease of acquiring fast Thomson scattered signals (~50 ns pulse width). The DAQ system consists of in-house designed and developed GI modules and a PXI-1405 chassis with several PXI-DAQ modules. The performance of the developed system has been validated during the SST-1 campaigns. The dynamic range of the GI module depends on the integrating capacitor (C_i), and the modules have been calibrated using 12 pF and 27 pF integrating capacitors. The GI-module-based data acquisition system provides sixty-four channels for simultaneous sampling, using eight PXI-based digitization modules with eight channels per module. The error estimation and functional tests of this unit were carried out using a standard source and also with the fast detectors used for the Thomson scattering diagnostics. A user-friendly Graphical User Interface (GUI) has been developed using LabVIEW on the Windows platform to control and acquire the Thomson scattering signal. A robust, cost-effective DAQ system, easy to operate and maintain, with low power consumption, a high dynamic range and very good sensitivity, has been developed and tested for the SST-1 Thomson scattering diagnostics.
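
    The role of the integrating capacitor follows from the standard gated-integrator relation (not spelled out in the abstract): the output voltage is the input current integrated over the gate, divided by the integrating capacitance,

        $V_{\mathrm{out}} \;=\; \frac{1}{C_i}\int_{t_0}^{t_0+T_{\mathrm{gate}}} I(t)\,dt \;=\; \frac{Q}{C_i}$

    so, for a fixed maximum output swing $V_{\max}$, the largest charge that can be integrated is $Q_{\max} = C_i \, V_{\max}$. The 27 pF option therefore roughly doubles the charge range of the 12 pF configuration, trading away sensitivity (volts per unit charge).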

  1. FASTBUS readout system for the CDF DAQ upgrade

    International Nuclear Information System (INIS)

    Andresen, J.; Areti, H.; Black, D.

    1993-11-01

    The Data Acquisition System (DAQ) at the Collider Detector at Fermilab is currently being upgraded to handle a minimum of 100 events/s, for an aggregate bandwidth of at least 25 Mbytes/s. The DAQ system is based on a commercial switching network that has interfaces to the VME bus. The modules that read out the front-end crates (FASTBUS and RABBIT) have to deliver the data to the VME-bus-based host adapters of the switch. This paper describes a readout system that has the required bandwidth while keeping the experiment dead time due to the readout to a minimum.

  2. Components for the data acquisition system of the ATLAS testbeams 1996

    International Nuclear Information System (INIS)

    Caprini, M; Niculescu, Michaela

    1997-01-01

    ATLAS is one of the experiments developed at CERN for the Large Hadron Collider. For the sub-detector testbeams a data acquisition system (DAQ) was designed. The Bucharest group is a member of the ATLAS DAQ collaboration and contributed to the development of several components of the testbeam DAQ: read-out modules for standalone and combined test-beams; a readout module for the liquid argon detector; the run control graphical user interface; and the central data recording system. The readout module is able to acquire data event by event from the detector electronics and is based on a Finite State Machine (FSM) incorporating a general scheme for the calibration procedure. The FSM allows detectors to take data either in standalone mode, with local control and recording, or in combined mode together with other sub-detectors, with very easy switching between the two configurations. The readout module for the liquid argon detector is written as a data flow element which takes raw data and creates a formatted event. At the initialization stage the run and detector parameters are read from the Run Control Parameters database. Then the state changes are driven by three interrupt signals (Start of Burst, Trigger, End of Burst) generated by hardware. In calibration mode, at each trigger the event is built (calibration data are taken outside the beam) and then the conditions for the next calibration trigger are prepared (DAC values, delays, pulsers). The graphical user interface is designed to be used for the control of the data acquisition system. The interface provides a global experiment panel for the activation of, and navigation in, all the command and display panels. The user can start, stop or change the state of the system, obtain the most important information about the state of the whole system, and activate other service programs in order to select parameters and databases and to display information about the evolution of the system. The central data recording system lays on the client
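
    The burst-driven readout logic described above reduces to a small finite state machine. The sketch below models it in self-contained C++; the state set and event handling are simplified, and none of the names correspond to the actual testbeam code.

        // Minimal FSM driven by the three hardware signals named in the abstract.
        #include <iostream>

        enum class Signal { StartOfBurst, Trigger, EndOfBurst };
        enum class State  { Idle, InBurst };

        class ReadoutModule {
        public:
            void handle(Signal s) {
                switch (s) {
                    case Signal::StartOfBurst:
                        state_ = State::InBurst;           // beam spill begins
                        break;
                    case Signal::Trigger:
                        if (state_ == State::InBurst) buildEvent();
                        break;
                    case Signal::EndOfBurst:
                        state_ = State::Idle;              // spill over, stop accepting triggers
                        break;
                }
            }
        private:
            void buildEvent() { std::cout << "event " << ++count_ << " built\n"; }
            State state_ = State::Idle;
            long  count_ = 0;
        };

        int main() {
            ReadoutModule rod;
            rod.handle(Signal::StartOfBurst);
            rod.handle(Signal::Trigger);     // builds event 1
            rod.handle(Signal::Trigger);     // builds event 2
            rod.handle(Signal::EndOfBurst);
        }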

  3. DAQ

    CERN Multimedia

    E. Meschi

    2013-01-01

    The File-based Filter Farm in the CMS DAQ MarkII: The CMS DAQ system will be upgraded after LS1 in order to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The experiment parameters for the post-LS1 data taking remain similar to those of Run 1: a Level-1 aggregate rate of 100 kHz and an aggregate HLT output bandwidth of up to 2 GB/s. A moderate event-size increase is anticipated from increased pile-up and changes in the detector readout. For the output bandwidth, the figure of 2 GB/s is assumed. The original Filter Farm design was successfully operated in 2010-2013 and its efficiency and fault tolerance were brought to an excellent level. There are, however, a number of disadvantages in that design at the interface between the DAQ data flow and the High-Level Trigger that warrant careful scrutiny in view of the deployment of DAQ2 after LS1: the reduction of the number of RU bui...

  4. Concepts and technologies used in contemporary DAQ systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    based trigger processors and event building farms. We have also seen a shift from standard or proprietary bus systems used in event building to Gigabit networks and commodity components, such as PCs. With the advances in processing power, network throughput, and storage technologies, today's data rates in large experiments routinely reach hundreds of MegaBytes/s. We will present examples of contemporary DAQ systems from different experiments, try to identify or categorize new approaches, and will compare the performance and throughput of existing DAQ systems with the projected data rates of the LHC experiments to see how close we have come to accomplishing these goals. We will also tr...

  5. DAQ

    CERN Document Server

    A. Racz

    The CMS DAQ installation status: The year 2005 was dedicated to the production and testing of the custom-made electronic boards and the procurement of the commercial items needed to operate the underground part of the Data Acquisition System of CMS. The first half of 2006 was spent installing the DAQ infrastructure in USC55 (dedicated cable trays in the false floor) and preparing the racks to receive the hardware elements. The second half of 2006 was dedicated to the installation of the CMS DAQ elements in the underground control room. As a quick reminder, the underground part of the Data Acquisition System performs two tasks: a) Front-End data collection and transmission to the online computing farm on the surface (SCX); b) Front-End status collection and elaboration of a smart back-pressure signal preventing the overflow of the Front-End electronics. The hardware elements installed to perform these two tasks are the following: 500 FRL cards receiving the data of one or two sender...

  6. New COMPASS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yunpeng; Konorov, Igor

    2015-07-01

    This contribution focuses on the deployment and first results of the new FPGA-based data acquisition system (DAQ) of the COMPASS experiment. Since 2002, the number of channels has increased to approximately 300,000 and the trigger rate to 30 kHz, while the average event size has remained roughly 35 kB. In order to handle the increased data rates, it was decided to replace the event building network with a new DAQ system built around custom FPGA-based data handling cards (DHC). The DHCs are equipped with 16 high-speed serial links, 2 GB of DDR3 memory with a bandwidth of 6 GB/s, a Gigabit Ethernet connection, and the COMPASS Trigger Control System. They use two different firmware versions: multiplexer and switch. The multiplexer DHC combines 15 incoming links into one outgoing link, whereas the switch combines 8 data streams from multiplexers and, using information from a look-up table, sends full events to the readout engine servers equipped with spill-buffer PCI-Express cards that receive the data. Both types of DHC can buffer data, which allows the load to be distributed over the accelerator cycle. Software tools have been developed for configuration, run control, and monitoring. Communication between processes in the system is implemented using the DIM library. The DAQ is fully configurable from a web interface. The new DAQ system was deployed for the pilot run starting in September 2014. In the poster, the preliminary performance and stability results of the new DAQ are presented and compared with the original system in more detail.
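
    A back-of-the-envelope check from the figures quoted above gives the average rate the event-building network must sustain within a spill:

        $30\,\mathrm{kHz} \times 35\,\mathrm{kB/event} \approx 1.05\,\mathrm{GB/s}$

    which makes clear why each DHC carries 2 GB of DDR3 buffering, used to spread the in-spill load over the full accelerator cycle, along with 6 GB/s of memory bandwidth.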

  7. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)696050; Garelli, N.; Herbst, R.T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A.J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Bartoldus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambe...

  8. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    ATLAS CSC Collaboration; The ATLAS collaboration

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chamber...

  9. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)664042

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thr...

  10. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Claus, Richard; The ATLAS collaboration

    2015-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thro...

  11. Orthos, an alarm system for the ALICE DAQ operations

    Science.gov (United States)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
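
    As an illustration of the injection-and-query pattern described above (a metrics repository behind SQL interfaces, scanned by a logic engine), the following minimal C++ sketch uses SQLite; the schema, table name, host names and threshold are hypothetical and are not the Orthos implementation.

        // Minimal sketch (not Orthos itself): inject a metric sample through SQL
        // and let a trivial stand-in for the "logic engine" flag values over a limit.
        #include <sqlite3.h>
        #include <cstdio>

        int main() {
            sqlite3* db = nullptr;
            if (sqlite3_open("orthos_demo.db", &db) != SQLITE_OK) return 1;

            // Hypothetical schema: one row per reported metric sample.
            sqlite3_exec(db,
                "CREATE TABLE IF NOT EXISTS metrics ("
                " host TEXT, name TEXT, value REAL,"
                " ts DATETIME DEFAULT CURRENT_TIMESTAMP);",
                nullptr, nullptr, nullptr);

            // Inject one sample; a real system would use prepared statements.
            sqlite3_exec(db,
                "INSERT INTO metrics(host, name, value) "
                "VALUES('daq-node-01', 'disk_used_pct', 93.5);",
                nullptr, nullptr, nullptr);

            // Query samples over a hypothetical alarm threshold and report them.
            sqlite3_exec(db,
                "SELECT host, name, value FROM metrics WHERE value > 90.0;",
                [](void*, int, char** argv, char**) -> int {
                    std::printf("ALARM: %s %s=%s\n", argv[0], argv[1], argv[2]);
                    return 0;
                },
                nullptr, nullptr);

            sqlite3_close(db);
            return 0;
        }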

  12. Orthos, an alarm system for the ALICE DAQ operations

    International Nuclear Information System (INIS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; Von Haller, Barthelemy; Denes, Ervin

    2012-01-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  13. FPGAs for next gen DAQ and Computing systems at CERN

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The need for FPGAs in DAQ is a given, but new systems need to be designed to meet the substantial increase in data rate and the challenges it brings. FPGAs are also power-efficient computing devices, so the work also looks at accelerating HEP algorithms and at integrating FPGAs with CPUs, taking advantage of programming models like OpenCL. Other explorations involved using OpenCL to model a DAQ system.

  14. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible ReadOutDriver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  15. A DAQ system for pixel detectors R and D

    International Nuclear Information System (INIS)

    Battaglia, M.; Bisello, D.; Contarato, D.; Giubilato, P.; Pantano, D.; Tessaro, M.

    2009-01-01

    Pixel detector R and D for HEP and imaging applications requires an easily configurable and highly versatile DAQ system able to drive and read out many different chip designs in a transparent way, with different control logics and/or clock signals. An integrated, real-time data collection and analysis environment is essential to achieve fast and reliable detector characterization. We present a DAQ system developed to fulfill these specific needs, able to handle multiple devices at the same time while providing a convenient, ROOT-based data display and online analysis environment.
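
    To make the "ROOT-based data display and online analysis" idea concrete, here is a minimal, hypothetical C++ sketch of such a loop; readPixelFrame() is a stand-in for the real device readout, and the Gaussian amplitudes are simulated. TH1F, TCanvas and TRandom3 are standard ROOT classes.

        // Illustrative only: fill a live spectrum as "data" arrive and snapshot it.
        #include <TH1F.h>
        #include <TCanvas.h>
        #include <TRandom3.h>

        // Hypothetical readout: returns one pixel amplitude in ADC counts.
        static double readPixelFrame(TRandom3& rng) { return rng.Gaus(120., 15.); }

        int main() {
            TH1F spectrum("spectrum", "Pixel amplitude;ADC counts;Entries", 256, 0., 256.);
            TRandom3 rng(0);
            for (int i = 0; i < 100000; ++i)
                spectrum.Fill(readPixelFrame(rng));   // fill as events arrive

            TCanvas c("c", "online display");
            spectrum.Draw();
            c.SaveAs("spectrum.png");                 // snapshot of the live display
            return 0;
        }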

  16. Contributions to the back-end software sub-system of the ATLAS data acquisition of event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype, based on the functional architecture described in the ATLAS Technical Proposal. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of 4 sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is member of DAQ/EF collaboration and during 1997 was involved in the Back-end activities. The back-end software encompasses the software for configuring, controlling and monitoring the DAQ but specifically excludes the management, processing or transportation of physics data. The user requirements gathered for the back-end sub-system have been divided into groups related to activities providing similar functionality. The groups have been further developed into components of the Back-end with a well defined purpose and boundaries. Each component offers some unique functionality and has its own architecture. The actual Back-end component model includes 5 core components (run control, configuration databases, message reporting system, process manager and information service) and 6 detector integration components (partition and resource manager, status display, run bookkeeper, event dump, test manager and diagnostic package). The Bucharest group participated to the high level design, implementation and testing of three components (information service, message reporting system and status display). The Information Service (IS) provides an information exchange facility for software components of the DAQ. Information (defined by the supplier) from many sources can be categorized and made available to requesting applications asynchronously or on demand. The design of the information service followed an object oriented approach. It is a multiple server configuration in which servers are dedicated to

  17. DAQ Architecture for the LHCb Upgrade

    International Nuclear Information System (INIS)

    Liu, Guoming; Neufeld, Niko

    2014-01-01

    LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10$^{33}$ cm$^{-2}$s$^{-1}$. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of ∼32 Tbit/s. The DAQ system will be based on high speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and relative cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ system.
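
    For orientation, the quoted ∼32 Tbit/s aggregate is what a 40 MHz readout implies if one assumes a mean event size of roughly 100 kB (the event size itself is not stated in the record):

    $$ 40\,\mathrm{MHz} \times 100\ \mathrm{kB/event} = 4\ \mathrm{TB/s} \approx 32\ \mathrm{Tbit/s}. $$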

  18. DAQ system for testing RPC front-end electronics of the INO experiment

    International Nuclear Information System (INIS)

    Hari Prasad, K.; Sukhwani, Menka; Kesarkar, Tushar A.; Kumar, Sandeep; Chandratre, V.B.; Das, D.; Shinde, R.R.; Satyanarayana, B.

    2015-01-01

    The Resistive Plate Chamber (RPC) is the active detector element in the INO experiment. The in-house developed ANUSPARSH-III ASICs are being used as the front-end electronics of the detector. The 2 m × 2 m RPC being used has 64 readout channels on the X-side and 64 readout channels on the Y-side. In order to test and validate the front-end electronics along with the RPC, a 64-channel DAQ system has been designed and developed. The detector parameters to be measured are noise rate, efficiency, hit pattern register and time resolution. The salient features of the DAQ system are: a 64-channel LVDS receiver in FPGA, FPGA-based parameter calculations, and a microcontroller that acquires the processed data from the FPGAs and sends it out through Ethernet and USB interfaces. The DAQ system consists of the following parts: two FPGAs each receiving 32 LVDS channels, FPGA firmware, microcontroller firmware, an Ethernet interface, an embedded web server hosting data analysis software, a USB interface, and LabWindows-based data analysis software. The DAQ system has been tested at TIFR with a 1 m × 1 m RPC
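
    The noise-rate and efficiency parameters listed above reduce to simple per-channel ratios; the C++ fragment below shows the arithmetic with purely hypothetical counts (in the real system these calculations run in the FPGA firmware).

        // Back-of-envelope sketch of the per-channel figures of merit;
        // all counts and times are invented for illustration.
        #include <cstdio>

        int main() {
            // Noise rate: singles counts with no beam/trigger, per unit live time.
            const double noiseCounts = 1.2e4;     // hypothetical singles counts
            const double liveTimeSec = 600.0;     // hypothetical live time
            std::printf("noise rate = %.1f Hz\n", noiseCounts / liveTimeSec);

            // Efficiency: matched hits in the chamber per reference trigger.
            const long triggers = 50000;          // hypothetical telescope triggers
            const long hits     = 47500;          // hypothetical matched hits
            std::printf("efficiency = %.3f\n", double(hits) / double(triggers));
            return 0;
        }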

  19. Development of multi-channel gated integrator and PXI-DAQ system for nuclear detector arrays

    International Nuclear Information System (INIS)

    Kong Jie; Su Hong; Chen Zhiqiang; Dong Chengfu; Qian Yi; Gao Shanshan; Zhou Chaoyang; Lu Wan; Ye Ruiping; Ma Junbing

    2010-01-01

    A multi-channel gated integrator and a PXI-based data acquisition system have been developed for nuclear detector arrays with hundreds of detector units. The multi-channel gated integrator can be controlled by a programmable GI controller. The PXI-DAQ system consists of an NI PXI-1033 chassis with several PXI-DAQ cards. The system software has a user-friendly GUI, written in C using LabWindows/CVI under the Windows XP operating system. The PXI-DAQ system is very reliable and capable of handling event rates up to 40 kHz.

  20. The readiness of ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The trigger system in ATLAS consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. The pre-existing two-level software filtering, known as L2 and the Event Filter, is now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architec...

  1. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2011-01-01

    ATLAS is one of the 4 LHC experiments, which started operating in collision mode in 2010. The ATLAS apparatus itself as well as the Trigger and DAQ systems are extremely complex facilities which have been built up by a collaboration of 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS Trigger and DAQ (TDAQ) community in order to support efficient participation of the experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from Web-based access to the general status and data quality of the ongoing data-taking session to a scalable service providing real-time mirroring of the detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where this data is made available ...

  2. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Green, B; Kugel, A; Joos, M; Panduro Vazquez, W; Schumacher, J; Teixeira-Dias, P; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS DAQ system. It receives and buffers data of events accepted by the first-level trigger from all subdetectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a 1 GbE-based network. The ATLAS ROS is being completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3: obsolete technologies need to be replaced, and space constraints require it to be compact. The new ROS will consist of roughly 100 Linux-based 2U-high rack-mounted server PCs, each equipped with 2 PCIe I/O cards and four 10 GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, the so-called RobinNP firmware. They will provide the connectivity to about 2000 optical point-to-point links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and ...

  3. LHCb Silicon Tracker DAQ and DCS Online Systems

    CERN Multimedia

    Buechler, A; Rodriguez, P

    2009-01-01

    The LHCb experiment at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, is specialized in precision measurements of b quark decays. The Silicon Tracker (ST) contributes a crucial part in tracking the particle trajectories and consists of two silicon micro-strip detectors, the Tracker Turicensis upstream of the LHCb magnet and the Inner Tracker downstream. The radiation and the magnetic field represent new challenges for the implementation of a Detector Control System (DCS) and the data acquisition (DAQ). The DAQ has to deal with more than 270K analog readout channels, 2K readout chips and real-time DAQ at a rate of 1.1 MHz with data processing at TELL1 level. The TELL1 real-time algorithms for clustering thresholds and other computations run on dedicated FPGAs that implement 13K configurable parameters per board, in total about 1.17 M parameters for the ST. After data processing the total throughput amounts to about 6.4 Gbytes per second from an input data rate of around 337 Gbytes per second. A finite state ma...
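
    Taking the two figures just quoted as per-second rates, the on-board TELL1 processing compresses the data stream by roughly a factor of fifty:

    $$ 337\ \mathrm{GB/s} \,/\, 6.4\ \mathrm{GB/s} \approx 53. $$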

  4. First-year experience with the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Corso-Radu, A

    2010-01-01

    ATLAS is one of the four experiments in the Large Hadron Collider (LHC) at CERN, which has been put into operation this year. The challenging experimental environment and the extreme detector complexity required development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as operational conditions of the hardware and software elements of the detector, trigger and data acquisition systems. At the moment the ATLAS Trigger/DAQ system is distributed over more than 1000 computers, which is about one third of the final ATLAS size. At every minute of an ATLAS data taking session the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles more than 4 million histogram updates coming from more than 4 thousand applications, executes 10 thousand advanced data quality checks for a subset of those histograms, and displays histograms and results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. This note presents an overview of the online monitoring software framework and describes the experience gained during an extensive commissioning period as well as during the first phase of LHC beam in September 2008. Performance results obtained on the current ATLAS DAQ system will also be presented, showing that the performance of the framework is adequate for the final ATLAS system.
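
    As a toy illustration of an automated data-quality check of the kind counted above (this is not the ATLAS framework's API), consider flagging a histogram whose mean drifts outside configured limits:

        // Hypothetical DQ check: the Check struct, limits and samples are invented.
        #include <cstdio>
        #include <numeric>
        #include <vector>

        struct Check { double lo, hi; };   // allowed band for the mean

        bool meanWithin(const std::vector<double>& h, Check c) {
            double mean = std::accumulate(h.begin(), h.end(), 0.0) / h.size();
            return mean >= c.lo && mean <= c.hi;
        }

        int main() {
            std::vector<double> samples{118, 121, 119, 123, 120};  // stand-in data
            Check c{115.0, 125.0};
            std::printf("DQ check: %s\n", meanWithin(samples, c) ? "GREEN" : "RED");
            return 0;
        }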

  5. FPGA-based 10-Gbit Ethernet Data Acquisition Interface for the Upgraded Electronics of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Grohs, J P; The ATLAS collaboration

    2013-01-01

    The readout of the trigger signals of the ATLAS Liquid Argon (LAr) calorimeters is foreseen to be upgraded in order to prepare for operation during the first high-luminosity phase of the Large Hadron Collider (LHC). Signals with improved spatial granularity are planned to be received from the detector by a Digital Processing System (DPS) in ATCA technology and will be sent in real-time to the ATLAS trigger system using custom optical links. These data are also sampled by the DPS for monitoring and will be read out by the regular Data Acquisition (DAQ) system of ATLAS which is a network-based PC-farm. The bandwidth between DPS module and DAQ system is expected to be on the order of 10 Gbit/s per module and a standard Ethernet protocol is foreseen to be used. DPS data will be prepared and sent by a modern FPGA either through a switch or directly to a Read-Out System (ROS) PC serving as buffer interface of the ATLAS DAQ. In a prototype setup, an ATCA blade equipped with a Xilinx Virtex-5 FPGA is used to send da...

  6. LHCb; DAQ Architecture for the LHCb Upgrade

    CERN Multimedia

    Neufeld, N

    2013-01-01

    LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10$^{33}$ cm$^{-2}$s$^{-1}$. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of 32 Tbit/s. The DAQ system will be based on high speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and (relative) cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ ...

  7. Development of BPM/BLM DAQ System for KOMAC Beam Line

    Energy Technology Data Exchange (ETDEWEB)

    Song, Young-Gi; Kim, Jae-Ha; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2016-10-15

    The proton beam is accelerated from 3 MeV to 100 MeV through 11 DTL tanks. The KOMAC installed 10 beam lines, 5 for 20-MeV beams and 5 for 100-MeV beams. The proton beam is transmitted to two target rooms. The KOMAC has been operating two beam lines, one for 20 MeV and one for 100 MeV. A new beam line, the RI beam line, is under commissioning. A Data Acquisition (DAQ) system is essential to monitor beam signals in the analog front-end circuitry from the BPMs and BLMs at the beam lines. The DAQ digitizes the beam signal, and the sampling is synchronized with a reference signal which is an external trigger for beam operation. The digitized data are accessible by the Experimental Physics and Industrial Control System (EPICS)-based control system, which manages the whole accelerator control. The beam monitoring system integrates BLM and BPM signals into the control system and offers real-time data to operators. The IOC, which is implemented with Linux and a PCI driver, supports data acquisition as a very flexible solution.
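
    Since the digitized beam data are exposed through EPICS, any Channel Access client can read them; the sketch below uses the standard libca calls (cadef.h) with a purely hypothetical PV name, invented here for illustration.

        // Sketch of an EPICS Channel Access client reading one beam-monitor PV.
        #include <cadef.h>
        #include <cstdio>

        int main() {
            ca_context_create(ca_disable_preemptive_callback);

            chid ch;
            // Hypothetical PV of the kind a BPM IOC might publish.
            ca_create_channel("KOMAC:BL100:BPM1:POS_X", nullptr, nullptr, 0, &ch);
            if (ca_pend_io(5.0) != ECA_NORMAL) { std::puts("connect timeout"); return 1; }

            double posX = 0.0;
            ca_get(DBR_DOUBLE, ch, &posX);       // queue the read request
            if (ca_pend_io(5.0) == ECA_NORMAL)   // flush and wait for the value
                std::printf("BPM x position: %.3f\n", posX);

            ca_clear_channel(ch);
            ca_context_destroy();
            return 0;
        }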

  8. High Performance Gigabit Ethernet Switches for DAQ Systems

    CERN Document Server

    Barczyk, Artur

    2005-01-01

    Commercially available high performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems on the other hand usually make use of very specific traffic patterns, with e.g. deterministic arrival times. The loss rate implied by industry's accepted 99.999% loss-less limit may still be unacceptably high for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches passing this criterion under random traffic can show significantly higher loss rates if subject to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis-based core switches, in a test-bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet-level simulation of the complete LHCb readout network. In this paper we report on the...
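
    The concern is easy to quantify: with a per-fragment loss probability of $10^{-5}$ (the 99.999% figure), an event assembled from $N$ fragments is incomplete with probability

    $$ P_{\mathrm{incomplete}} = 1 - (1 - 10^{-5})^{N} \approx N \times 10^{-5}, $$

    so for an illustrative $N = 300$ readout sources (a value chosen here for scale, not taken from the record) about 0.3% of events would be damaged, far more than an event builder can tolerate.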

  9. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$ that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  10. The readiness of the ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the Large Hadron Collider (LHC) will provide proton-proton collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. Design choices and the strategies employed to minimize the data-collection and the selection latency will be discussed. First results of tests done during the commissioning phase and the operational performance after the first months of data taking will be presented.

  11. DAQ INSTALLATION IN USC COMPLETED

    CERN Multimedia

    A. Racz

    After one year of work at P5 in the underground control rooms (USC55-S1&S2), the DAQ installation in USC55 is complete. The first half of 2006 was dedicated to installing the DAQ infrastructure (private cable trays, rack equipment for very dense cabling, connection to services, i.e. water, power, network). The second half was spent installing the custom-made electronics (FRLs and FMMs) and placing all the inter-rack cables/fibers connecting all sub-systems to the central DAQ (more details are given in the internal pages). The installation was carried out by DAQ group members from both the hardware and the software side. The pictures show the very nice team spirit!

  12. Core component integration tests for the back-end software sub-system in the ATLAS data acquisition and event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    2000-01-01

    The ATLAS data acquisition (DAQ) and Event Filter (EF) prototype -1 project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system on the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the DAQ. The back-end sub-system includes core components and detector integration components. The core components provide the basic functionality and had priority in terms of time-scale for development in order to have a baseline sub-system that can be used for integration with the data-flow sub-system and event filter. The following components are considered to be the core of the back-end sub-system: - Configuration databases, describe a large number of parameters of the DAQ system architecture, hardware and software components, running modes and status; - Message reporting system (MRS), allows all software components to report messages to other components in the distributed environment; - Information service (IS) allows the information exchange for software components; - Process manager (PMG), performs basic job control of software components (start, stop, monitoring the status); - Run control (RC), controls the data taking activities by coordinating the operations of the DAQ sub-systems, back-end software and external systems. Performance and scalability tests have been made for individual components. The back-end subsystem integration tests bring together all the core components and several trigger/DAQ/detector integration components to simulate the control and configuration of data taking sessions. For back-end integration tests a test plan was provided. The tests have been done using a shell script that goes through different phases as follows: - starting the back-end server processes to initialize communication services and PMG; - launching configuration specific processes via DAQ supervisor as

  13. The HLT, DAQ and DCS TDR

    CERN Multimedia

    Wickens, F. J

    At the end of June the Trigger-DAQ community achieved a major milestone with the submission to the LHCC of the Technical Design Report (TDR) for DAQ, HLT and DCS. The first unbound copies were handed to the LHCC referees on the scheduled date of 30th June; this was followed a few days later by a limited print run which produced the first bound copies (see Figure 1). As had previously been announced both to the LHCC and the ATLAS Collaboration, it was not possible on this timescale to give a complete validation of all of the aspects of the architecture in the TDR. So it had been agreed that further work would continue over the summer to provide more complete results for the formal review by the LHCC of the TDR in September. Thus there followed an intense programme of measurements and analysis: especially to provide results for HLT both in testbeds and for the event selection software itself; to provide additional information on scaling of the dataflow aspects; to provide first results on the new prototype ROBin...

  14. Test Management Framework for the ATLAS Experiment

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration; Avolio, Giuseppe

    2018-01-01

    The Data Acquisition (DAQ) of the ATLAS experiment is a large, distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. Advanced diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation and fast recovery in case of problems and, finally, to the high efficiency of the whole experiment. The base layer of the verification and diagnostic functionality is a test management framework. We have developed a flexible test management system that allows the experts to define and configure tests for different components, indicate follow-up actions to test failures and describe inter-dependencies between DAQ or detector elements. This development is based on the experience gained with the previous test system that was used during the first three years of th...

  15. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
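
    The link counts quoted above imply the following per-chamber and per-board loads (simple arithmetic on the record's numbers):

    $$ 320\ \text{G-links} / 32\ \text{chambers} = 10\ \text{input links/chamber}, \qquad 320\ \text{G-links} / 6\ \text{COBs} \approx 53\ \text{links/COB}, $$

    with one output S-link per chamber (32 in total).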

  16. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Claus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  17. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  18. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R.T.; Huffer, M.; Kocian, M.; Ruckman, L.; Russell, J.; Su, D.; Wittgen, M.; Iakovidis, G.; Iordanidou, K.; Moschovakos, P.; Ntekas, K.; Kwan, K.; Lankford, A.J.; Nelson, A.; Schernau, M.; Schlenker, S.; Valderanis, C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2

  19. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Energy Technology Data Exchange (ETDEWEB)

    Claus, R., E-mail: claus@slac.stanford.edu

    2016-07-11

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  20. On-chamber readout system for the ATLAS MDT Muon Spectrometer

    CERN Document Server

    Chapman, J; Ball, R; Brandenburg, G; Hazen, E; Oliver, J; Posch, C

    2004-01-01

    The ATLAS MDT Muon Spectrometer is a system of approximately 380,000 pressurized cylindrical drift tubes of 3 cm diameter and up to 6 meters in length. These Monitored Drift Tubes (MDTs) are precision-glued to form super-layers, which in turn are assembled into precision chambers of up to 432 tubes each. Each chamber is equipped with a set of mezzanine cards containing analog and digital readout circuitry sufficient to read out 24 MDTs per card. Up to 18 of these cards are connected to an on-chamber DAQ element referred to as a Chamber Service Module, or CSM. The CSM multiplexes data from the mezzanine cards and outputs this data on an optical fiber which is received by the off-chamber DAQ system. Thus, the chamber forms a highly self-contained unit with DC power in and a single optical fiber out. The Monitored Drift Tubes, due to their length, require a terminating resistor at their far end to prevent reflections. The readout system has been designed so that thermal noise from this resistor remains the domi...
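
    For scale, the thermal (Johnson-Nyquist) noise of such a termination is $v_{n,\mathrm{rms}} = \sqrt{4 k_B T R \Delta f}$; with illustrative values, not taken from the record, of $R = 400\,\Omega$, $T = 300\,$K and a bandwidth $\Delta f = 15\,$MHz:

    $$ v_{n,\mathrm{rms}} = \sqrt{4 \times 1.38 \times 10^{-23} \times 300 \times 400 \times 1.5 \times 10^{7}}\ \mathrm{V} \approx 10\ \mu\mathrm{V}. $$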

  1. DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW

    Science.gov (United States)

    Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei

    2011-12-01

    In order to obtain simultaneously a high sample rate and a large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a dual-channel high-speed DAQ application for a PicoScope 5203 digital oscilloscope based on LabVIEW. We accomplished this by creating Call Library Function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs), PS5000.dll and PS5000wrap.dll, provided by Pico Technology. The maximum real-time sample rate of the DAQ application can reach 1 GS/s. The time and amplitude resolutions of the application can be controlled by changing their units in the block diagram, as can the start and end times of the sampling operations. The experimental results show that the application has a sample rate high enough and a buffer large enough to meet the demanding DAQ requirements of the chaos fiber-optic fence system.
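
    Outside LabVIEW the same DLL-call pattern can be reproduced directly: load the driver library at run time and resolve one entry point. In this sketch the library file name and the exact signature of ps5000OpenUnit are assumptions based on the record, not verified against Pico's headers; dlopen/dlsym are the standard POSIX calls.

        // Hedged sketch: dynamic loading of the oscilloscope driver from C++.
        #include <dlfcn.h>   // POSIX; on Windows use LoadLibrary/GetProcAddress
        #include <cstdio>

        int main() {
            void* lib = dlopen("libps5000.so", RTLD_NOW);   // library name assumed
            if (!lib) { std::puts("driver library not found"); return 1; }

            using OpenFn = int (*)(short*);                 // assumed signature
            auto openUnit = reinterpret_cast<OpenFn>(dlsym(lib, "ps5000OpenUnit"));
            if (!openUnit) { std::puts("symbol not found"); dlclose(lib); return 1; }

            short handle = 0;
            int status = openUnit(&handle);                 // 0 is assumed to mean OK
            std::printf("ps5000OpenUnit -> status %d, handle %d\n", status, handle);

            dlclose(lib);
            return 0;
        }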

  2. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used...
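
    A toy version of the bookkeeping such a framework performs, turning per-call measurements into per-trigger totals, might look as follows; the trigger names and all figures are invented for illustration and do not come from the ATLAS menu.

        // Hypothetical cost summary: total CPU and requested data per trigger.
        #include <cstdio>

        struct Trigger { const char* name; long calls; double cpuMsPerCall; double kBPerCall; };

        int main() {
            Trigger menu[] = {
                {"e24_medium",  90000, 12.0, 150.0},   // invented figures
                {"mu20",        60000,  8.5, 110.0},
                {"j100",       150000,  3.2,  60.0},
            };
            for (const Trigger& t : menu) {
                double cpuSec = t.calls * t.cpuMsPerCall / 1e3;  // total CPU seconds
                double dataGB = t.calls * t.kBPerCall / 1e6;     // total data requested
                std::printf("%-12s cpu=%8.1f s  data=%6.2f GB\n", t.name, cpuSec, dataGB);
            }
            return 0;
        }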

  3. Final Test at the Surface of the ATLAS Endcap Muon Trigger Chamber Electronics

    CERN Document Server

    Kubota, T; Kanaya, N; Kawamoto, T; Kobayashi, T; Kuwabara, T; Nomoto, H; Sakamoto, H; Yamaguchi, T; Fukunaga, C; Ikeno, M; Iwasaki, H; Nagano, K; Nozaki, M; Sasaki, O; Tanaka, S; Yasu, Y; Hasegawa, Y; Oshita, H; Takeshita, T; Nomachi, M; Sugaya, Y; Sugimoto, T; Okumura, Y; Takahashi, Y; Tomoto, M; Kadosaka, T; Kawagoe, K; Kiyamura, H; Kurashige, H; Niwa, T; Ochi, A; Omachi, C; Takeda, H; Lifshitz, R; Lupu, N; Bressler, S; Tarem, S; Kajomovitz, E; Ben Ami, S; Bahat Treidel, O; Benhammou, Ya; Etzion, E; Lellouch, D; Levinson, L; Mikenberg, G; Roich, A

    2007-01-01

    For the detector commissioning planned in 2007, sector assembly of the ATLAS muon-endcap trigger chambers and the final surface test of the assembled electronics are being done at CERN and are almost complete. For the test, we built up a Data Acquisition (DAQ) system using test pulses of two types together with cosmic rays in order to check the functionality of various aspects of the electronics mounted on a sector. So far, 99% of all 320,000 channels have been tested and most of them have been installed in the ATLAS cavern. In this presentation, we will describe the DAQ systems and the mass-test procedure in detail, and report the results of the electronics tests along with some practical experience

  4. Contributions to dataflow sub-system of the ATLAS data acquisition and event filter prototype-1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of 4 sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is a member of the DAQ/EF collaboration and during 1997 it was involved in the Dataflow activities. The Dataflow component of the ATLAS DAQ/EF prototype is responsible for moving the event data from the detector read-out links to the final mass storage. It also provides event data for monitoring purposes and implements local control for the various elements. The Dataflow system is designed to cover three main functions, namely: the collection and buffering of the data from the detector, the merging of fragments into full events and the interaction with the event filter sub-farm. The event building function is covered by a Dataflow building block named Event Builder. All the other functions of the Dataflow system are covered by the two modular building blocks, the read-out crate (ROC) and the sub-farm DAQ (SFC). The Bucharest group was mainly involved in the activities related to the high-level design, initial implementation and tests of the ROC supporting the read-out from one or more read-out drivers and having one or more connections to the event builder. The main data flow within the ROC is handled by three input/output modules named IOMs: the trigger module (TRG), the event builder interface module (EBIF) and the read-out buffer module (ROB). The TRG receives and buffers data control messages from the level 1 and level 2 trigger systems, the EBIF builds fragments and makes them available to the event building sub-system, and the ROB receives and buffers ROB fragments from the read-out link, S-LINK. In order to estimate the performance which could be achieved with the actual

  5. Full system test of module to DAQ for ATLAS IBL

    Energy Technology Data Exchange (ETDEWEB)

    Behpour, Rouhina; Mattig, Peter; Wensing, Marius [Wuppertal University (Germany); Bindi, Marcello [Goettingen University (Germany)

    2015-07-01

    The IBL (Insertable B-Layer), the innermost layer of the ATLAS detector at the LHC, was successfully integrated into the system in June 2014. The IBL system's reliability and consistency are under investigation during ongoing milestone runs at CERN. The Back of Crate (BOC) card and the Read Out Driver (ROD), two of the main electronics cards, act as the interface between the IBL modules and the TDAQ chain. Detector data are received, processed and then formatted through the interplay of these two cards. The BOC takes advantage of an S-Link implementation inside its main FPGAs. The S-Link protocol, a standard high-performance data acquisition link between the readout electronics cards and the TDAQ system, was developed and is used at CERN: formatted detector data are transferred through optical fibers to the ROS (Read Out System) PCs, where they are buffered via the ROBIN (Read Out Buffer) cards. This talk presents results that confirm a stable and good performance of the system, from the modules to the readout electronics cards and then to the ROS PCs via S-Link.

  6. Implementation of KoHLT-EB DAQ System using compact RIO with EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Dae-Sik; Kim, Suk-Kwon; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    EPICS (Experimental Physics and Industrial Control System) is a collaboratively developed collection of software tools which can be integrated to provide a comprehensive and scalable control system. Such systems are increasingly used in large physics experiments like KSTAR, ITER and DAIC (Daejeon Accelerator Ion Complex). The Korean heat load test facility (KoHLT-EB) was installed at KAERI. This facility is utilized for qualification tests of the plasma-facing components (PFC) for the ITER first wall and the DEMO divertor, and for thermo-hydraulic experiments. The previous data acquisition device was an Agilent 34980A multifunction switch and measurement unit controlled by Agilent VEE. In the present paper, we report on the newly upgraded, EPICS-based KoHLT-EB DAQ system, an advanced data acquisition system using FPGA-based reconfigurable DAQ devices such as CompactRIO. The operator interface of the KoHLT-EB DAQ system is built with Control System Studio (CSS); a separate server archives the related data using the standalone archive tool, and the ArchiveViewer can retrieve that data at any time within the local network.

  7. Configurable data and CAMAC hardware representations for implementation of the SPHERE DAQ and offline systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    An implementation of a configurable representation of the experimental data for use in the DAQ and offline systems of the SPHERE setup at the LHE, JINR, is described. A software scheme for the configurable description of the SPHERE CAMAC hardware, intended for an online data acquisition (DAQ) implementation based on the qdpb system, is also presented.

  8. The ATLAS ROBIN – A High-Performance Data-Acquisition Module

    CERN Document Server

    Kugel, Andreas

    2009-01-01

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the “PULL” strategy in contrast to the commonly used “PUSH” strategy. The data volume transported is reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the...

  9. The New CMS DAQ System for Run 2 of the LHC

    CERN Document Server

    AUTHOR|(CDS)2087644; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Darlea, Georgiana Lavinia; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Forrest, Andrew Kevin; Gigi, Dominique; Glege, Frank; Gomez Ceballos, Guillelmo; Gomez-Reino Garrido, Robert; Hegeman, Jeroen Guido; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; Vivian O'Dell; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Stieger, Benjamin Bastian; Sumorok, Konstanty; Veverka, Jan; Zejdl, Petr

    2015-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold. Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a micro-TCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation...
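
    The two headline figures of the record fix the mean event size seen by the event builder:

    $$ 100\ \mathrm{GB/s} \,/\, 100\ \mathrm{kHz} = 1\ \mathrm{MB/event}. $$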

  10. DAQ system for high energy polarimeter at the LHE, JINR: implementation based on the qdpb (data processing with branchpoints) system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    The implementation of the online data acquisition (DAQ) system for the High Energy Polarimeter (HEP) at the LHE, JINR, is described. The HEP DAQ is based on the qdpb system. Software modules specific to this implementation (dependent on the HEP data and hardware) are discussed.

  11. Production Performance of the ATLAS Semiconductor Tracker Readout System

    CERN Document Server

    Mitsou, V A

    2006-01-01

    The ATLAS Semiconductor Tracker (SCT) together with the pixel and the transition radiation detectors will form the tracking system of the ATLAS experiment at LHC. It will consist of 20000 single-sided silicon microstrip sensors assembled back-to-back into modules mounted on four concentric barrels and two end-cap detectors formed by nine disks each. The SCT module production and testing has finished while the macro-assembly is well under way. After an overview of the layout and the operating environment of the SCT, a description of the readout electronics design and operation requirements will be given. The quality control procedure and the DAQ software for assuring the electrical functionality of hybrids and modules will be discussed. The focus will be on the electrical performance results obtained during the assembly and testing of the end-cap SCT modules.

  12. Web-based DAQ systems: connecting the user and electronics front-ends

    International Nuclear Information System (INIS)

    Lenzi, Thomas

    2016-01-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.

  13. Web-based DAQ systems: connecting the user and electronics front-ends

    Science.gov (United States)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.

  14. Design of data transmission for a portable DAQ system

    International Nuclear Information System (INIS)

    Zhou Wenxiong; Nan Gangyang; Zhang Jianchuan; Wang Yanyu

    2014-01-01

    Field Programmable Gate Arrays (FPGA), combined with ARM (Advanced RISC Machines) processors, are increasingly employed in portable data acquisition (DAQ) systems for nuclear experiments to reduce system volume and achieve powerful, multifunctional capability. High-speed data transmission between the FPGA and the ARM is one of the most challenging issues for system implementation. In this paper, we propose a method to realize high-speed data transmission in which the FPGA acquires massive data from the FEE (front-end electronics) and sends it to the ARM, while the ARM transmits the data to a remote computer through the TCP/IP protocol for later processing. This paper mainly introduces the interface design of the high-speed transmission method between the FPGA and the ARM, the transmission logic of the FPGA, and the program design of the ARM. Theoretical analysis shows that the maximum transmission speed between the FPGA and the ARM through this method can reach 50 MB/s. In a realistic nuclear physics experiment, this portable DAQ system achieved a 2.2 MB/s data acquisition speed. (authors)
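
    As a hedged illustration of the ARM-side transfer loop described above (the actual FPGA/ARM interface is custom hardware; the device path, host and port below are invented), the data path reduces to reading fixed-size blocks and streaming them over a TCP socket:

        # Illustrative sketch: read blocks handed over by the FPGA (modelled
        # here as a device file) and stream them to a remote computer over TCP.
        import socket

        CHUNK = 64 * 1024  # 64 KiB read granularity (assumed)

        def forward(dev_path: str, host: str, port: int) -> None:
            with socket.create_connection((host, port)) as sock, \
                    open(dev_path, "rb", buffering=0) as dev:
                while True:
                    block = dev.read(CHUNK)
                    if not block:          # end of acquisition
                        break
                    sock.sendall(block)    # TCP guarantees in-order delivery

        if __name__ == "__main__":
            forward("/dev/fpga_fifo0", "192.168.1.100", 5000)  # hypothetical values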

  15. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  16. DAQ system for low density plasma parameters measurement

    International Nuclear Information System (INIS)

    Joshi, Rashmi S.; Gupta, Suryakant B.

    2015-01-01

    In various cases where low-density plasmas (number densities from 1E4 to 1E6 cm^-3) exist, for example in basic plasma studies or in the LEO space environment, the measurement of plasma parameters becomes very critical. Conventional tip (cylindrical) Langmuir probes often result in unstable measurements in such low-density plasma. Due to its larger surface area, a spherical Langmuir probe is used to measure such low plasma densities. Applying a sweep voltage signal to the probe and measuring the current corresponding to each voltage gives the V-I characteristics of the plasma, which can be plotted on a digital storage oscilloscope. This plot is analyzed to calculate various plasma parameters. The aim of this paper is to measure plasma parameters using a spherical Langmuir probe and an indigenously developed DAQ system. The DAQ system consists of a Keithley source-meter and a host system connected by a GPIB interface. An online plasma parameter diagnostic system is developed for measuring plasma properties of non-thermal plasma in vacuum. An algorithm is developed on the LabVIEW platform. The V-I characteristics of the plasma are plotted with respect to different filament current values and different locations of the Langmuir probe with reference to the plasma source. The V-I characteristics are also plotted for forward and reverse voltage sweeps generated programmatically from the source-meter. (author)
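
    A sweep of this kind is straightforward to script. The sketch below uses PyVISA with Keithley 2400-style SCPI commands as an assumption (the authors' system is LabVIEW-based and their exact instrument commands are not given); the GPIB address and sweep range are placeholders:

        # Hedged sketch of a Langmuir-probe voltage sweep via a source-meter.
        import numpy as np
        import pyvisa  # pip install pyvisa

        rm = pyvisa.ResourceManager()
        smu = rm.open_resource("GPIB0::24::INSTR")   # hypothetical GPIB address

        smu.write("*RST")
        smu.write(":SOUR:FUNC VOLT")                 # source voltage, measure current
        smu.write(":SENS:FUNC 'CURR'")
        smu.write(":OUTP ON")

        voltages = np.linspace(-20.0, 20.0, 201)     # sweep range (assumed)
        currents = []
        for v in voltages:
            smu.write(f":SOUR:VOLT {v:.3f}")
            # :READ? triggers a measurement; the reply format depends on the model.
            currents.append(float(smu.query(":READ?").split(",")[1]))

        smu.write(":OUTP OFF")
        np.savetxt("vi_curve.txt", np.column_stack([voltages, currents]))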

  17. The LHCb DAQ system

    CERN Document Server

    Jost, B

    2000-01-01

    The LHCb experiment is the most recently approved of the 4 experiments under construction at CERN's LHC accelerator. It is a special purpose experiment designed to precisely measure the CP violation parameters in the B-Bbar system. Triggering poses special problems since the interesting events containing B-mesons are immersed in a large background of inelastic p-p reactions. We therefore decided to implement a 4 level triggering scheme. The LHCb Data Acquisition (DAQ) system will have to cope with an average trigger rate of ∼40 kHz, after two levels of hardware triggers, and an average event size of ∼150 kB. Thus an event-building network which can sustain an average bandwidth of 6 GB/s is required. A powerful software trigger farm will have to be installed to reduce the rate from the 40 kHz to ∼200 Hz of events written to permanent storage. In this paper we will concentrate on the networking aspects of the LHCb data acquisition and the controls system. 11 Refs.

  18. Data Acquisition (DAQ) system dedicated for remote sensing applications on Unmanned Aerial Vehicles (UAV)

    Science.gov (United States)

    Keleshis, C.; Ioannou, S.; Vrekoussis, M.; Levin, Z.; Lange, M. A.

    2014-08-01

    Continuous advances in unmanned aerial vehicles (UAV) and the increased complexity of their applications raise the demand for improved data acquisition systems (DAQ). These improvements may comprise low power consumption, low volume and weight, robustness, modularity and the capability to interface with various sensors and peripherals while maintaining high sampling rates and processing speeds. Such a system has been designed and developed and is currently integrated on the Autonomous Flying Platforms for Atmospheric and Earth Surface Observations (APAESO/NEA-YΠOΔOMH/NEKΠ/0308/09); however, it can be easily adapted to any UAV or any other mobile vehicle. The system consists of a single-board computer with a dual-core processor, rugged surface-mount memory and storage devices, analog and digital input-output ports and many other peripherals that enhance its connectivity with various sensors, imagers and on-board devices. The system is powered by a high-efficiency power supply board. Additional boards such as frame-grabbers, differential global positioning system (DGPS) satellite receivers, and general packet radio service (3G-4G-GPRS) modems for communication redundancy have been interfaced to the core system and are used whenever there is a mission need. The onboard DAQ system can be preprogrammed for automatic data acquisition or it can be remotely operated during the flight from the ground control station (GCS) using a graphical user interface (GUI) which has been developed and will also be presented in this paper. The unique design of the GUI and the DAQ system enables the synchronized acquisition of a variety of scientific and UAV flight data in a single core location. The new DAQ system and the GUI have been successfully utilized in several scientific UAV missions. In conclusion, the novel DAQ system provides the UAV and remote-sensing communities with a new tool capable of reliably acquiring, processing, storing and transmitting data from any sensor integrated on the platform.

  19. The ATLAS Trigger Core Configuration and Execution System in Light of the ATLAS Upgrade for LHC Run 2

    CERN Document Server

    Heinrich, Lukas; The ATLAS collaboration

    2015-01-01

    During the 2013/14 shutdown of the Large Hadron Collider (LHC), the ATLAS first-level trigger (L1T) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity expected to be delivered by the LHC in 2015. Among other changes, the L1T was extended on the calorimeter side (L1Calo) to better cope with pile-up and to apply better-tuned isolation criteria to electron, photon, and jet candidates. The central trigger (CT) was widened to analyze twice as many inputs, provide more trigger lines, and serve multiple sub-detectors in parallel during calibration periods. A new FPGA-based trigger, capable of analyzing event topologies at 40 MHz, was added to provide further input to the level-1 trigger decision (L1Topo). On the DAQ side, the dataflow was completely remodeled, merging the two previously existing stages of the software-based high level trigger into one. Partially because of these changes, partially because of the new trigger paradigm to h...

  20. FELIX: The New Approach for Interfacing to Front-end Electronics for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Zhang, Jinlong

    2016-01-01

    From the ATLAS Phase-I upgrade and onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fibre transceivers. By separating data transport from data manipulation, the latter can be done in software in commodity servers attached to the network. Replacing traditional point-to-point links between front-end components and the DAQ system with a switched network, FELIX provides scalability, flexibility, uniformity and upgradability, and reduces the diversity of custom hardware solutions in favour of software.

  1. The DAQ system of OPERA experiment and its specifications for the spectrometers

    International Nuclear Information System (INIS)

    Dusini, S.; Barichello, G.; Dal Corso, F.; Felici, G.; Lindozzi, M.; Stalio, S.; Sorrentino, G.

    2004-01-01

    We present an overview of the data acquisition system (DAQ) and event building of OPERA. OPERA is a long-baseline neutrino experiment with a highly modular detector and a low event rate. To deal with these features, a distributed DAQ system based on Ethernet standards for data transfer has been chosen. A distributed GPS clock signal is used for synchronization and time-stamping of the data. This architecture allows very modular and flexible event building based on a software trigger strategy. We also present its specific application to the spectrometer sub-detector, where RPC trackers are installed. Self-triggering is a dedicated feature that makes the system sensitive to out-of-spill events and possibly allows data taking before the official start of the experiment.

  2. Evolution of the Trigger and Data Acquisition System for the ATLAS experiment

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energy and rates. The TDAQ is composed of three levels, which reduce the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The first part of this paper gives an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It describes how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking, in some cases pushing performance beyond the original design specification. The experience accumulated in TDAQ system operation during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), ...

  3. BTeV trigger/DAQ innovations

    International Nuclear Information System (INIS)

    Votava, Margaret

    2005-01-01

    The BTeV experiment was a collider-based high energy physics (HEP) B-physics experiment proposed at Fermilab. It included a large-scale, high-speed trigger/data acquisition (DAQ) system, reading data off the detector at 500 Gbytes/sec and writing to mass storage at 200 Mbytes/sec. The online design was considered to be highly credible in terms of technical feasibility, schedule and cost. This paper will give an overview of the overall trigger/DAQ architecture, highlight some of the challenges, and describe the BTeV approach to solving some of the technical challenges. At the time of termination in early 2005, the experiment had just passed its baseline review. Although not fully implemented, many of the architecture choices, design, and prototype work for the online system (both trigger and DAQ) were well on their way to completion. Other large, high-speed online systems may have interest in some of the design choices and directions of BTeV, including (a) a commodity-based tracking trigger running asynchronously at full rate, (b) the hierarchical control and fault tolerance in a large real-time environment, (c) a partitioning model that supports offline processing on the online farms during idle periods, with plans for dynamic load balancing, and (d) an independent parallel highway architecture.

  4. The LHCb RICH Upgrade: Development of the DCS and DAQ system.

    CERN Multimedia

    Cavallero, Giovanni

    2018-01-01

    The LHCb experiment is preparing for an upgrade during the second LHC long shutdown in 2019-2020. In order to fully exploit the LHC flavour-physics potential with a five-fold increase in instantaneous luminosity, a trigger-less readout will be implemented. The RICH detectors will require new photon detectors and brand-new front-end electronics. The status of the integration of the RICH photon detector modules with the MiniDAQ, the prototype of the upgraded LHCb readout architecture, is reported. The development of the prototype of the RICH Upgrade Experiment Control System, integrating the DCS and DAQ partitions in a single FSM, is described. The status of the development of the RICH Upgrade Inventory, Bookkeeping and Connectivity database is reported as well.

  5. Readout and Trigger for the AFP Detector at the ATLAS Experiment

    CERN Document Server

    Kocian, Martin; The ATLAS collaboration

    2018-01-01

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016 two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running Linux. In this contribution we give an overview of the AFP detector and the commissioning steps taken to integrate it with the ATLAS TDAQ. Furthermore, first performance results are presented.

  6. The ATLAS event filter

    CERN Document Server

    Beck, H P; Boissat, C; Davis, R; Duval, P Y; Etienne, F; Fede, E; Francis, D; Green, P; Hemmer, F; Jones, R; MacKinnon, J; Mapelli, Livio P; Meessen, C; Mommsen, R K; Mornacchi, Giuseppe; Nacasch, R; Negri, A; Pinfold, James L; Polesello, G; Qian, Z; Rafflin, C; Scannicchio, D A; Stanescu, C; Touchard, F; Vercesi, V

    1999-01-01

    An overview of the studies for the ATLAS Event Filter is given. The architecture and the high-level design of the DAQ-1 prototype are presented. The current status of the prototypes is briefly given. Finally, future plans and milestones are given. (11 refs).

  7. Overview of DAQ developments for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Emschermann, David [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Compressed Baryonic Matter experiment (CBM) at the future Facility for Antiproton and Ion Research (FAIR) is a fixed-target setup operating at very high interaction rates of up to 10 MHz. The high rate capability is achieved with fast and radiation-hard detectors equipped with free-streaming readout electronics. A high-speed data acquisition (DAQ) system will forward data volumes of up to 1 TB/s from the CBM cave to the first-level event selector (FLES), located 400 m away. This presentation showcases recent developments of DAQ components for CBM. We highlight the anticipated DAQ setup for beam tests scheduled for the end of 2015.

  8. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, a large-scale high-energy physics detector at the BEPC. The inter-process communication (IPC) of the online software in distributed environments is pivotal for the design and implementation of the DAQ system. This article introduces a distributed inter-process communication framework which is based on CORBA and used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and of applications based on it. (authors)
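
    The BES III framework itself is CORBA-based; purely to illustrate the underlying idea of typed messages exchanged between distributed DAQ processes, here is a minimal length-prefixed message layer over TCP (all names are hypothetical and this is not the BES III code):

        # Conceptual sketch of inter-process message passing: each message is
        # a 4-byte big-endian length prefix followed by a JSON payload.
        import json
        import socket
        import struct

        def send_msg(sock: socket.socket, payload: dict) -> None:
            data = json.dumps(payload).encode()
            sock.sendall(struct.pack("!I", len(data)) + data)  # length prefix

        def recv_msg(sock: socket.socket) -> dict:
            (length,) = struct.unpack("!I", _recv_exact(sock, 4))
            return json.loads(_recv_exact(sock, length))

        def _recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed connection")
                buf += chunk
            return buf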

  9. Verification and Diagnostics Framework in ATLAS Trigger/DAQ

    CERN Document Server

    Barczyk, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Klose, D.; Lima, J.; Pedro, J.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.; Ryabov, Yu.

    2003-01-01

    Trigger and data acquisition (TDAQ) systems for modern HEP experiments are composed of thousands of hardware and software components depending on each other in a very complex manner. Typically, such systems are operated by non-expert shift operators, who are not aware of system functionality details. It is therefore necessary to help the operator to control the system and to minimize system down-time by providing knowledge-based facilities for automatic testing and verification of system components and also for error diagnostics and recovery. For this purpose, a verification and diagnostic framework was developed in the scope of ATLAS TDAQ. The verification functionality of the framework allows developers to configure simple low-level tests for any component in a TDAQ configuration. A test can be configured as one or more processes running on different hosts. The framework organizes tests in sequences, using knowledge about component hierarchy and dependencies, and allows the operator to verify the fun...

  10. The C-RORC PCIe Card and its Application in the ALICE and ATLAS Experiments

    CERN Document Server

    Engel, H; Costa, F; Crone, G J; Eschweiler, D; Francis, D; Green, B; Joos, M; Kebschull, U; Kiss, T; Kugel, A; Panduro Vasquez, J G; Soos, C; Teixeira-Dias, P; Tremblet, L; Vande Vyvre, P; Vandelli, W; Vermeulen, J C; Werner, P; Wickens, F J

    2015-01-01

    The ALICE and ATLAS DAQ systems read out detector data via point-to-point serial links into custom hardware modules, the ALICE RORC and ATLAS ROBIN. To meet the increase in operational requirements both experiments are replacing their respective modules with a new common module, the C-RORC. This card, developed by ALICE, implements a PCIe Gen 2 x8 interface and interfaces to twelve optical links via three QSFP transceivers. This paper presents the design of the C-RORC, its performance and its application in the ALICE and ATLAS experiments.

  11. Results from the commissioning of the ATLAS Pixel Detector

    CERN Document Server

    Masetti, L

    2008-01-01

    The Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN. It is an 80 million channel silicon tracking system designed to detect charged tracks and secondary vertices with very high precision. After connection of cooling and services and verification of their operation, the ATLAS Pixel Detector is now in the final stage of its commissioning phase. Calibration of optical connections, verification of the analog performance and special DAQ runs for noise studies have been performed and the first tracks in combined operation with the other subdetectors of the ATLAS Inner Detector were observed. The results from calibration tests on the whole detector and from cosmic muon data are presented.

  12. The ALICE DAQ infoLogger

    Science.gov (United States)

    Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration

    2014-04-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 gigabytes per second and stores them at up to 2.5 gigabytes per second. The infoLogger is the log system which centrally collects the messages issued by the thousands of processes running on the DAQ machines. It allows errors to be reported on the fly, and keeps a trace of runtime execution for later investigation. More than 500000 messages are stored every day in a MySQL database, in a structured table keeping track, for each message, of 16 indexing fields (e.g. time, host, user, ...). The total amount of logs for 2012 exceeds 75 GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system, and future evolutions.
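
    The collector-to-database step described above can be sketched as follows, assuming a hypothetical table and a reduced set of the indexing fields (the real infoLogger schema with its 16 fields is not reproduced here):

        # Hedged sketch: store one structured log message in MySQL.
        import datetime

        import mysql.connector  # pip install mysql-connector-python

        conn = mysql.connector.connect(host="logdb", user="daq",
                                       password="secret", database="infologger")
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO messages (ts, host, user, severity, payload) "
            "VALUES (%s, %s, %s, %s, %s)",
            (datetime.datetime.utcnow(), "ldc01", "daqop", "ERROR",
             "equipment readout timeout"),  # example message content
        )
        conn.commit()
        cur.close()
        conn.close()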

  13. Development of an ADC Radiation Tolerance Characterization System for the Upgrade of the ATLAS LAr Calorimeter

    CERN Document Server

    INSPIRE-00445642; Chen, Kai; Kierstead, James; Lanni, Francesco; Takai, Helio; Jin, Ge

    2016-01-01

    The ATLAS LAr calorimeter will perform its Phase-I upgrade during the long shutdown (LS2) in 2018, when a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multichannel high-speed ADCs have been selected as possible backups for the radiation-tolerant ADC ASICs of the LTDB. In order to evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, a data acquisition (DAQ) board, a signal generator, external power supplies and a host computer. The ADC board is custom designed for the different ADCs, with the ADC driver and clock distribution circuits integrated on board. A Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed wit...
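
    On the host-computer side, receiving and unpacking the ADC data stream might look like the following sketch. The wire format is not specified in the record, so raw UDP frames of big-endian 16-bit words are assumed purely for illustration:

        # Hedged sketch of the host side of such a test stand: receive ADC
        # sample frames forwarded by the FPGA over Ethernet and unpack them.
        import socket
        import struct

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 7000))          # hypothetical port

        frame, _ = sock.recvfrom(9000)        # jumbo-frame sized buffer
        n = len(frame) // 2
        samples = struct.unpack(f"!{n}H", frame[: 2 * n])
        print(f"received {n} samples, first ten: {samples[:10]}")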

  14. artdaq: DAQ software development made simple

    Science.gov (United States)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. As art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support for an alternate mode of running, whereby data from some subdetector components are only streamed if requested, has been added; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment come new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.
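
    Of the metric formats listed above, the Graphite plaintext protocol is simple enough to show in full: one "path value unix-timestamp" line per metric, sent over TCP (port 2003 by default). The metric name and host below are illustrative, not taken from an actual artdaq configuration:

        # Send one metric sample using the Graphite plaintext protocol.
        import socket
        import time

        def send_metric(host: str, path: str, value: float) -> None:
            line = f"{path} {value} {int(time.time())}\n"
            with socket.create_connection((host, 2003)) as sock:
                sock.sendall(line.encode())

        send_metric("graphite.example.org", "daq.eventbuilder.rate_hz", 4200.0)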

  15. Status of the Melbourne experimental particle physics DAQ, silicon hodoscope and readout systems

    International Nuclear Information System (INIS)

    Moorhead, G.F.

    1995-01-01

    This talk will present a brief review of the current status of the Melbourne Experimental Particle Physics group's primary data acquisition system (DAQ), the associated silicon hodoscope and trigger systems, and of the tests currently underway and foreseen. Simulations of the propagation of Ru-106 β particles through the system will also be shown.

  16. A verilog simulation of the CDF DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Schurecht, K.; Harris, R. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P.; Grindley, R. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system.

  17. A verilog simulation of the CDF DAQ system

    International Nuclear Information System (INIS)

    Schurecht, K.; Harris, R.; Sinervo, P.; Grindley, R.

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system
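
    The behavioural-simulation technique used in these two records can be illustrated in a few lines: model a DAQ component as a single-server queue driven by a time-ordered event list and measure throughput. The rates below are invented, and this sketch is a conceptual analogue of the approach, not the CDF Verilog model:

        # Toy discrete-event simulation of a DAQ stage as a single-server queue.
        import heapq
        import random

        ARRIVAL_RATE = 1000.0      # triggers per second (assumed)
        SERVICE_TIME = 0.0008      # per-event processing time in s (assumed)

        events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
        queue, busy_until, done, t_end = 0, 0.0, 0, 10.0

        while events:
            t, kind = heapq.heappop(events)
            if t > t_end:
                break
            if kind == "arrival":
                queue += 1          # buffer the trigger, schedule the next one
                heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
            if queue and busy_until <= t:   # server free: start next service
                queue -= 1
                busy_until = t + SERVICE_TIME
                heapq.heappush(events, (busy_until, "done"))
            if kind == "done":
                done += 1

        print(f"processed {done} events in {t_end} s -> {done / t_end:.0f} Hz")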

  18. ATLAS: triggers for B-physics

    International Nuclear Information System (INIS)

    George, Simon

    2000-01-01

    The LHC will produce bb-bar events at an unprecedented rate. The number of events recorded by ATLAS will be limited by the rate at which they can be stored offline and subsequently analysed. Despite the huge number of events, the small branching ratios mean that analysis of many of the most interesting channels for CP violation and other measurements will be limited by statistics. The challenge for the Trigger and Data Acquisition (DAQ) system is therefore to maximise the fraction of interesting B decays in the B-physics data stream. The ATLAS Trigger/DAQ system is split into three levels. The initial B-physics selection is made in the first-level trigger by an inclusive low-pT muon trigger (∼6 GeV). The second-level trigger strategy is based on identifying classes of final states by their partial reconstruction. The muon trigger is confirmed before proceeding to a track search. Electron/hadron separation is given by the transition radiation tracking detector and the electromagnetic calorimeter. Muon identification is possible using the muon detectors and the hadronic calorimeter. From silicon strips, pixels and straw tracking, precise track reconstruction is used to make selections based on invariant mass, momentum and impact parameter. The ATLAS trigger group is currently engaged in algorithm development and performance optimisation for the B-physics trigger. This is closely coupled to the R and D programme for the higher-level triggers. Together the two programmes of work will optimise the hardware, architecture and algorithms to meet the challenging requirements. This paper describes the current status and progress of this work.
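
    The invariant-mass selection mentioned above reduces to combining track four-momenta and testing a mass window. Below is a worked example with made-up muon four-momenta (GeV units) and a window around the J/psi mass, purely for illustration:

        # Two-track invariant mass: m^2 = (sum E)^2 - |sum p|^2.
        import math

        def inv_mass(p1, p2):
            """p = (E, px, py, pz); returns m of the two-track combination."""
            e = p1[0] + p2[0]
            px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
            return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

        mu_plus = (6.1, 1.2, -0.8, 5.9)    # hypothetical muon four-momentum
        mu_minus = (4.3, -0.9, 0.5, 4.1)

        m = inv_mass(mu_plus, mu_minus)
        accept = abs(m - 3.097) < 0.3      # +-300 MeV window around m(J/psi)
        print(f"m(mu+mu-) = {m:.3f} GeV, accept = {accept}")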

  19. ATLAS tile calorimeter cesium calibration control and analysis software

    International Nuclear Information System (INIS)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N

    2008-01-01

    An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS-to-DAQ Connection) to connect to the PVSS-based slow-control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. The performance of the system and first experience from the ATLAS pit are presented.

  20. ATLAS tile calorimeter cesium calibration control and analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N [Institute for High Energy Physics, Protvino 142281 (Russian Federation)], E-mail: Oleg.Solovyanov@ihep.ru

    2008-07-01

    An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS-to-DAQ Connection) to connect to the PVSS-based slow-control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. The performance of the system and first experience from the ATLAS pit are presented.
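
    The two records above mention Python scripting facilities driving the calibration. As a hedged illustration of that style of control script, here is a skeleton scan loop in which every hardware and TDAQ call is a hypothetical stub:

        # Skeleton cesium-scan control loop; all helpers are stand-in stubs,
        # not the real TDAQ (DVS/IS/DDC) interfaces.
        import time

        def start_source(): print("cesium source launched")
        def park_source(): print("cesium source parked in garage")
        def source_in_module(module): return True      # would poll IS in reality
        def record_response(module): print(f"recorded response of {module}")

        def cesium_scan(modules, timeout_s=600.0):
            start_source()
            try:
                for module in modules:
                    deadline = time.time() + timeout_s
                    while not source_in_module(module):  # wait for source arrival
                        if time.time() > deadline:
                            raise TimeoutError(f"source stuck before {module}")
                        time.sleep(1.0)
                    record_response(module)              # read out the response
            finally:
                park_source()                            # always retract the source

        cesium_scan(["LBA01", "LBA02"])                  # hypothetical module names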

  1. Future of DAQ Frameworks and Approaches, and Their Evolution towards the Internet of Things

    Science.gov (United States)

    Neufeld, Niko

    2015-12-01

    Nowadays, a DAQ system is a complex network of processors, sensors and many other active devices. Historically, providing a framework for DAQ has been a very important role of the host institutes of experiments. Reviewing the evolution of such DAQ frameworks is a very interesting subject for the conference. “Internet of Things” is a recent buzzword, but a DAQ framework could be a good example of IoT.

  2. The ATLAS ROBIN. A high-performance data-acquisition module

    Energy Technology Data Exchange (ETDEWEB)

    Kugel, Andreas

    2009-08-19

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy. The data volume transported is thereby reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of an FPGA processor previously designed by the author. He contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions, including the various design reports and presentations. The results show that the ROBIN module is able to meet

  3. The ATLAS ROBIN. A high-performance data-acquisition module

    International Nuclear Information System (INIS)

    Kugel, Andreas

    2009-01-01

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy. The data volume transported is thereby reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of an FPGA processor previously designed by the author. He contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions, including the various design reports and presentations. The results show that the ROBIN module is able to meet

  4. LabVIEW DAQ for NE213 Neutron Detector

    International Nuclear Information System (INIS)

    Al-Adeeb, Mohammed

    2003-01-01

    A neutron spectroscopy system, based on an NE213 liquid scintillation detector, is to be placed at the Stanford Linear Accelerator Center to measure neutron spectra from a few MeV up to 800 MeV beyond shielding. The NE213 scintillator, coupled to a photomultiplier tube (PMT), detects radiation and converts it into a current signal for processing. Signals are processed through Nuclear Instrument Modules (NIM) and Computer Automated Measurement and Control (CAMAC) modules. CAMAC is a computer-automated data acquisition and handling system. Pulses are properly prepared and fed into an analog-to-digital converter (ADC), a standard CAMAC module. The ADC classifies the incoming analog pulses into 1 of 2048 digital channels. Data acquisition (DAQ) software based on LabVIEW, version 7.0, acquires and organizes data from the CAMAC ADC. The DAQ system presents a spectrum showing the relationship between pulse events and their respective charge (digital channel number). Various photon sources, such as Co-60, Y-88, and AmBe-241, are used to calibrate the NE213 detector. For each source, a Compton edge and reference energy [units of MeVee] is obtained. A complete calibration curve results (at a given applied voltage to the PMT and pre-amplification gain) when the Compton edge and reference energy for each source are plotted. This project is focused on the development of a DAQ system and control setup to collect and process information from an NE213 liquid scintillation detector. A manual is created to document the process of the development and interpretation of the LabVIEW-based DAQ system. Future high-energy neutron measurements can be referenced and normalized according to this calibration curve.
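
    The calibration step described above amounts to a linear fit of assigned Compton-edge energies against measured ADC channels. In the sketch below the channel numbers are invented; the MeVee values are the calculated Compton edges of the quoted gamma lines:

        # Fit a linear energy calibration E [MeVee] = gain * channel + offset.
        import numpy as np

        # (ADC channel of Compton edge, assigned energy in MeVee) - example values
        edges = np.array([
            [412.0, 0.96],    # Co-60, 1.17 MeV gamma
            [487.0, 1.12],    # Co-60, 1.33 MeV gamma
            [722.0, 1.61],    # Y-88, 1.84 MeV gamma
            [1830.0, 4.20],   # AmBe, 4.44 MeV gamma
        ])

        gain, offset = np.polyfit(edges[:, 0], edges[:, 1], 1)
        print(f"E [MeVee] = {gain:.5f} * channel + {offset:.3f}")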

  5. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Calvet, D.

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high-speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high-speed messaging and their implementation on ATM components are described. Small-scale demonstrator systems consisting of up to 48 computers (∼1:20 of the final level-2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  6. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments; Reseau a multiplexage statistique pour les systemes de selection et de reconstruction d'evenements dans les experiences de physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Calvet, D

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high-speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high-speed messaging and their implementation on ATM components are described. Small-scale demonstrator systems consisting of up to 48 computers (∼1:20 of the final level-2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  8. The D0 online monitoring and automatic DAQ recovery

    International Nuclear Information System (INIS)

    Haas, A.

    2004-01-01

    The DZERO experiment, located at the Fermi National Accelerator Laboratory, has recently started the Run 2 physics program. The detector upgrade included a new Data Acquisition/Level 3 Trigger system. Part of the design for the DAQ/Trigger system was a new monitoring infrastructure. The monitoring was designed to satisfy real-time requirements with 1-second resolution as well as non-real-time data. It was also designed to handle a large number of displays without putting undue load on the sources of monitoring information. The resulting protocol is based on XML, is easily extensible, and has spawned a large number of displays, clients, and other applications. It is also one of the few sources of detector performance information available outside the Online System's security wall. A tool based on this system, which provides auto-recovery from DAQ errors, has been designed. This talk will include a description of the DZERO DAQ/Online monitor server, based on the ACE framework, the protocol, the auto-recovery tool, and several of the unique displays, which include an ORACLE-based archiver and numerous GUIs.

  9. Cold front-end electronics and Ethernet-based DAQ systems for large LAr TPC readout

    CERN Document Server

    D. Autiero; B. Carlus; Y. Declais; S. Gardien; C. Girerd; J. Marteau; H. Mathez

    2010-01-01

    Large LAr TPCs are among the most powerful detectors to address open problems in particle and astro-particle physics, such as CP violation in the leptonic sector, neutrino properties and their astrophysical implications, proton decay searches, etc. The scale of such detectors implies severe constraints on their readout and DAQ systems. We are carrying out an electronics R&D programme on a complete readout chain, including an ASIC located close to the collecting planes in the argon gas phase and a DAQ system based on smart Ethernet sensors implemented in the µTCA standard. The choice of the latter standard is motivated by the similarity of its constraints with those existing in the network telecommunication industry. We also developed a synchronization scheme derived from the IEEE 1588 standard, complemented by the use of the clock recovered from the Gigabit link.

  10. Development of a cost-effective and flexible vibration DAQ system for long-term continuous structural health monitoring

    Science.gov (United States)

    Nguyen, Theanh; Chan, Tommy H. T.; Thambiratnam, David P.; King, Les

    2015-12-01

    In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular as this could keep track of the health status of structures during their service lives. However, implementing such a system is not always feasible due to on-going conflicts between budget constraints and the need of sophisticated systems to monitor real-world structures under their demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex with a special focus on the main building. First, selections of sensor type and sensor positions are scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. In order to economically tackle the sparse measurement problem, a cost-optimized Ethernet-based peripheral DAQ model is first adopted to form the system skeleton. A combination of a high-resolution timing coordination method based on the TCP/IP command communication medium and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental-numerical verifications show that the proposed DAQ system in general and the data synchronization solution in particular work well and they can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to make use of the developed monitoring system for long-term continuous structural health evaluation as well as to use the instrumented building herein as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization related issues.
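
    The synchronization strategy described above rests on a classic two-way timing exchange. Here is a minimal sketch of the offset estimate, assuming a hypothetical "TIME?" command that returns an 8-byte timestamp (the paper's exact protocol is not reproduced):

        # Estimate a remote DAQ unit's clock offset from a request/response
        # round trip: offset = remote time - midpoint of local send/receive.
        import socket
        import struct
        import time

        def estimate_offset(host: str, port: int) -> float:
            with socket.create_connection((host, port)) as sock:
                t0 = time.time()
                sock.sendall(b"TIME?")                    # hypothetical command
                remote = struct.unpack("!d", sock.recv(8))[0]
                t1 = time.time()
            return remote - (t0 + t1) / 2.0               # seconds; +ve = remote ahead

        # offset = estimate_offset("daq-unit-03.local", 9100)  # hypothetical unit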

  11. DAQ

    CERN Multimedia

    F. Meijers

    2012-01-01

    The DAQ operated efficiently for the remainder of the 2012 pp run, during which the LHC reached a peak luminosity of 7.5E33 (at 50 ns bunch spacing). At the start of a fill, typical conditions are: an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1.5 kHz recording of stream-A with a size of ~500 kB after compression. The stream-A High Level Trigger (HLT) output includes the physics triggers and consists of the ‘core’ triggers and the ‘parked’ triggers, at about equal rates. Downtime due to central DAQ was below 1%. During the year, various improvements and enhancements have been implemented. An example is the introduction of the ‘action matrix’ in run control. This matrix defines a small set of run modes, each linking a consistent set of sub-detector read-out configurations and L1 and HLT settings as a function of LHC modes. This mechanism facilitates operation as it automatically proposes the run mode depending on the actual...

  12. The version control service for ATLAS data acquisition configuration files

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300, and they are updated by many system experts. In the past, after such updates we occasionally experienced problems caused by XML syntax errors or by files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification causing a problem or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed on ATLAS Point-1. It excludes direct write access to the XML files stored in a central database repository. Instead, for an update, the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...
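
    The validate-then-commit workflow described above can be sketched as follows, with XML well-formedness standing in for the service's much richer consistency checks and a generic version control CLI assumed for the commit step:

        # Hedged sketch: abort the commit if the XML is malformed.
        import subprocess
        import xml.etree.ElementTree as ET

        def commit_config(path: str, message: str) -> None:
            ET.parse(path)  # raises ParseError on malformed XML -> abort commit
            subprocess.run(["git", "add", path], check=True)     # VCS choice assumed
            subprocess.run(["git", "commit", "-m", message], check=True)

        # commit_config("partitions/example.data.xml", "raise ROS buffer pages")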

  13. Research and development of common DAQ platform

    International Nuclear Information System (INIS)

    Higuchi, T.; Igarashi, Y.; Nakao, M.; Suzuki, S.Y.; Tanaka, M.; Nagasaka, Y.; Varner, G.

    2003-01-01

    The upgrade of the KEKB accelerator toward L = 10^35 cm^-2 s^-1 requires an upgrade of the Belle data acquisition system. To match the market trend, we are developing a DAQ platform based on the PCI bus that enables fast DAQ with a long system lifetime. The platform is a VME-9U motherboard comprising four slots for signal digitization modules and three PMC slots to house CPUs for data compression. The platform is equipped with event FIFOs for data buffering to minimize dead-time. A trigger module residing on a VME-6U size rear board is connected to the 9U board via a PCI-PCI bridge to interrupt the CPU upon the level-1 trigger. (author)

  14. Design of low noise front-end ASIC and DAQ system for CdZnTe detector

    International Nuclear Information System (INIS)

    Luo Jie; Deng Zhi; Liu Yinong

    2012-01-01

    A low-noise front-end ASIC has been designed for CdZnTe detectors. The chip contains 16 channels; each channel consists of a dual-stage charge-sensitive preamplifier, a 4th-order semi-Gaussian shaper, a leakage current compensation (LCC) circuit, a discriminator and an output buffer. The chip has been fabricated in a Chartered 0.35 μm CMOS process, and preliminary results show that it works well. The total channel charge gain can be adjusted from 100 mV/fC to 400 mV/fC and the peaking time from 1 μs to 4 μs. The minimum measured ENC at zero input capacitance is 70 e and the minimum noise slope is 20 e/pF. The peak detector and derandomizer (PDD) ASIC developed by BNL and an associated USB DAQ board are also introduced in this paper. Two front-end ASICs can be connected to the PDD ASIC on the USB DAQ board, composing a 32-channel DAQ system for CdZnTe detectors. (authors)

  15. A Web 2.0 approach to DAQ monitoring and controlling

    Energy Technology Data Exchange (ETDEWEB)

    Penschuck, Manuel [Goethe-Universitaet, Frankfurt (Germany); Collaboration: TRB3-Collaboration

    2014-07-01

    In the scope of experimental set-ups for the upcoming FAIR experiments, a FPGA-based general purpose trigger and read-out board (TRB3) has been developed which is already in use in several detector set-ups (e.g. HADES, CBM-MVD, PANDA). For on- and off-board communication between the DAQ's subsystems, TrbNet, a specialised high-speed, low-latency network protocol developed for the DAQ system of the HADES detector, is used. Communication with any computer infrastructure is provided by Gigabit Ethernet. Monitoring and configuration of all DAQ systems and front-end electronics is consistently managed by the powerful slow-control features of TrbNet and supported by a flexible and mature software tool-chain, designed to meet the diverse requirements during development, setup phase and experiment. Most building blocks offer a graphical-user-interface (GUI) implemented using omnipresent web 2.0 technologies, which enable rapid prototyping, network transparent access and impose minimal software dependencies on the client's machine. This contribution will present the GUI-related features and infrastructure highlighting the multiple interfaces from the DAQ's slow-control to the client's web-browser.

  16. Intra and Inter-IOM Ccommunications Summary Document

    CERN Document Server

    Ambrosini, G; Cetin, S A; Conka, T; Fernandes, A; Francis, D; Joos, M; Lehmann, G; Mailov, A; Mapelli, L; Mornacchi, Giuseppe; Niculescu, M; Nurdan, K; Petersen, J; Spiwoks, R; Tremblet, L J; Ünel, G

    1999-01-01

    This document summarises the work performed, within the context of the DAQ-Unit of the DataFlow system in the ATLAS DAQ/EF prototype-1, on intra- and inter-Input/Output Module (IOM) message passing. This document fulfils the ATLAS DAQ/EF prototype-1 milestone of February 1999.

  17. Development of the Calibrator of Reactivity Meter Using PC-Based DAQ System

    International Nuclear Information System (INIS)

    Edison; Mariatmo, A.; Sujarwono

    2007-01-01

    The reactivity meter calibrator has been developed using a PC-based DAQ system programmed in LabVIEW. The output of the calibrator is a voltage proportional to the neutron density n(t) corresponding to a step reactivity change ρ_0. The “Kalibrator meter reactivitas.vi” program calculates the seven roots and coefficients of the solution n(t) of the reactor kinetics equations using the in-hour equation. Based on the values dt = t_{k+1} - t_k and t_0 = 0 entered by the user, the program approximates n(t) on each interval t_k ≤ t < t_{k+1}, where k = 0, 1, 2, 3, ..., by the step function n(t) = n_0 ∑_{j=1}^{7} A_j exp(ω_j t_k). The program then commands the DAQ device to output the voltage V(t) = n(t) volts at time t. Measurements of standard reactivities with the reactivity meter showed that the maximum deviation of the measured reactivity from its standard value was less than 1%. (author)
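
    The step-function evaluation is easy to tabulate once the seven roots and coefficients are known. In the sketch below the roots and coefficients are placeholders, not a real solution of the in-hour equation:

        # Tabulate n(t_k) = n0 * sum_j A_j * exp(omega_j * t_k) and the voltage.
        import math

        n0 = 1.0
        dt = 0.1                    # s, user-supplied interval (example)
        omega = [0.05, -0.1, -0.5, -1.2, -5.0, -20.0, -80.0]        # placeholder roots, 1/s
        A = [1.02, -0.005, -0.004, -0.003, -0.003, -0.002, -0.003]  # placeholder coefficients

        for k in range(5):
            t_k = k * dt
            n = n0 * sum(a * math.exp(w * t_k) for a, w in zip(A, omega))
            print(f"t = {t_k:.1f} s  ->  V = {n:.4f} V")   # V(t) = n(t) volts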

  18. Embedded DAQ System Design for Temperature and Humidity Measurement

    International Nuclear Information System (INIS)

    Memon, T.R.

    2013-01-01

    In this work, we have proposed a cost-effective DAQ (Data Acquisition) system design useful for local industries, using the user-friendly LabVIEW (Laboratory Virtual Instrumentation Electronic Workbench). The proposed system can measure and control different industrial parameters, which are presented in graphical icon format. The design provides 8 channels, and was tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are given upper and lower limits and are controlled using relays. An embedded system based on a standard microcontroller acquires and processes the analog data and passes it over a serial interface to a PC for further processing in LabVIEW. The designed system is capable of monitoring and recording the linkage between temperature and humidity in industrial units, indicating abnormalities within the process and controlling those abnormalities through relays. (author)
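
    A minimal sketch of the limit-check logic such a system could implement (hypothetical names and limit values; the system described above implements this in microcontroller firmware and LabVIEW, not desktop C++):

        #include <cstdio>

        // Per-channel window; the record only states that upper and lower
        // limits are set per parameter and enforced through relays.
        struct Limits { double low, high; };

        // True if the value is outside its window, i.e. the relay should act.
        bool relayOn(double value, const Limits& lim) {
            return value < lim.low || value > lim.high;
        }

        int main() {
            const Limits tempLimits{18.0, 35.0};  // deg C, assumed values
            const Limits rhLimits{30.0, 70.0};    // % RH, assumed values
            const double temp = 36.2, rh = 55.0;  // stand-ins for acquired data
            std::printf("temperature relay: %s\n",
                        relayOn(temp, tempLimits) ? "ON" : "off");
            std::printf("humidity relay:    %s\n",
                        relayOn(rh, rhLimits) ? "ON" : "off");
        }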

  19. The Third ATLAS ROD Workshop

    CERN Multimedia

    Poggioli, L.

    A new-style Workshop After two successful ATLAS ROD Workshops dedicated to the ROD hardware, held at the Geneva University in 1998 and in 2000, a new-style workshop took place at LAPP in Annecy on November 14-15, 2002. This time the workshop was fully dedicated to ROD-TDAQ integration and software, in view of the upcoming integration activities of the final RODs for detector assembly and commissioning. More precisely, the aim of this workshop was to obtain from the sub-detectors the parameters needed by T-DAQ, as well as the status and plans of the ROD builders. Conversely, existing decisions and assumptions (such as EB decisions and URDs) had to be stated, along with support plans. The Workshop gathered about 70 participants from all ATLAS sub-detectors and the T-DAQ community. The quite dense agenda nevertheless allowed for many lively discussions, and for a dinner in the old town of Annecy. The Sessions The Workshop was organized in five main sessions: Assumptions and recommendations Sub-de...

  20. DZERO Level 3 DAQ/Trigger Closeout

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started, in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern, networked, commodity-hardware trigger and data acquisition system built around a large central switch with about 60 front ends and 200 trigger computers. DZERO front-end crates are VME based; a Single Board Computer interfaces the detector data on VME to the network transport of the DAQ system. Event flow is controlled by the Routing Master, which can steer events to clusters of farm nodes based on the low-level trigger bits that fired. The farm nodes are multi-core commodity computers, without special hardware, that run isolated software to make the final Level 3 trigger decision. Passed events are transferred to th...

  1. The ATLAS ROBIN. A high-performance data-acquisition module

    Energy Technology Data Exchange (ETDEWEB)

    Kugel, Andreas

    2009-08-19

    This work presents the re-configurable processor ROBIN, a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy; the data volume transported is thereby reduced by a factor of 10, however the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of a FPGA processor previously designed by the author. He has contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions including the various design

  2. Embedded DAQ System Design for Temperature and Humidity Measurement

    Directory of Open Access Journals (Sweden)

    Tarique Rafique Memon

    2016-05-01

    Full Text Available In this work, we have proposed a cost-effective DAQ (Data Acquisition) system design useful for local industries, using the user-friendly LabVIEW (Laboratory Virtual Instrumentation Electronic Workbench). The proposed system can measure and control different industrial parameters, which are presented in graphical icon format. The design provides 8 channels, and was tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are given upper and lower limits and are controlled using relays. An embedded system based on a standard microcontroller acquires and processes the analog data and passes it over a serial interface to a PC for further processing in LabVIEW. The designed system is capable of monitoring and recording the linkage between temperature and humidity in industrial units, indicating abnormalities within the process and controlling those abnormalities through relays

  3. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    Science.gov (United States)

    Branchini, P.; Budano, A.; Balla, A.; Beretta, M.; Ciambrone, P.; De Lucia, E.; D'Uffizi, A.; Marciniewski, P.

    2013-04-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors to be installed by the KLOE-2 collaboration. The system consists of a general-purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module is built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB), which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232 interface, a USB interface and a Gigabit Ethernet interface. The optical interface is used for DAQ purposes, while the Gigabit Ethernet interface serves monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links deliver data to the VME board, which performs data concentration tasks; the return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  4. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    International Nuclear Information System (INIS)

    Branchini, P; Budano, A; Balla, A; Beretta, M; Ciambrone, P; Lucia, E De; D'Uffizi, A; Marciniewski, P

    2013-01-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors to be installed by the KLOE-2 collaboration. The system consists of a general-purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module is built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB), which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232 interface, a USB interface and a Gigabit Ethernet interface. The optical interface is used for DAQ purposes, while the Gigabit Ethernet interface serves monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links deliver data to the VME board, which performs data concentration tasks; the return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  5. Test Management Framework for the Data Acquisition of the ATLAS Experiment

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2017-01-01

    Data Acquisition (DAQ) of the ATLAS experiment is a large, distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. The advanced testing and diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation, fast recovery in case of problems and, ultimately, to the high efficiency of the whole experiment. The base layer of the verification and diagnostic functionality is a test management framework. We have developed a flexible test management system that allows experts to define and configure tests for different components, indicate follow-up actions on test failures and describe inter-dependencies between DAQ or detector elements. This development is based on the experience gained with the previous test system, which was used during the first three years of data taking. We discovered that more emphasis needed to be pu...
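
    As an illustration of the idea of such a framework (not the actual ATLAS TDAQ code; all names below are hypothetical), a test registry with per-component tests, follow-up actions and dependencies might look like:

        #include <cstdio>
        #include <functional>
        #include <map>
        #include <string>
        #include <vector>

        // Each component declares tests, a follow-up action on failure, and
        // dependencies on other components.
        struct TestCase {
            std::string name;
            std::function<bool()> run;   // returns true on success
            std::string followUp;        // action suggested on failure
        };

        struct Component {
            std::vector<std::string> dependsOn;
            std::vector<TestCase> tests;
        };

        int main() {
            std::map<std::string, Component> reg;
            reg["ROS-1"]  = {{}, {{"ping", [] { return true; }, "power-cycle node"}}};
            reg["HLT-42"] = {{"ROS-1"},
                             {{"config-check", [] { return false; }, "reload config"}}};
            for (const auto& [name, comp] : reg)
                for (const auto& t : comp.tests)
                    if (!t.run())
                        std::printf("%s: test '%s' failed -> %s\n",
                                    name.c_str(), t.name.c_str(), t.followUp.c_str());
        }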

  6. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2010-01-01

    ATLAS is one of the 4 LHC experiments, which started operating in collision mode in 2010. The ATLAS apparatus itself, as well as the Trigger and DAQ systems, are extremely complex facilities which have been built by a collaboration of 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS TDAQ community in order to support efficient participation of experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from Web-based access to the general status and data quality of the ongoing data-taking session, to a scalable service providing real-time mirroring of detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where the data is made available to remote users t...

  7. Firmware development and testing of the ATLAS Pixel Detector / IBL ROD card

    International Nuclear Information System (INIS)

    Gabrielli, A.; Balbi, G.; Falchieri, D.; Lama, L.; Travaglini, R.; Backhaus, M.; Bindi, M.; Chen, S.P.; Hauck, S.; Hsu, S.C.; Flick, T.; Wensing, M.; Kretz, M.; Kugel, A.

    2015-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shut down. In particular, the Pixel detector has gained an additional inner layer called the Insertable B-Layer (IBL). The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential front-end data path of the IBL's off-detector DAQ system. The strategy for IBL ROD firmware development was three-fold: keeping as much of the Pixel ROD datapath firmware logic as possible, employing a completely new scheme of steering and calibration firmware, and designing the overall system to prepare for a future unified code version integrating the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ test bench using a realistic front-end chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation, tested on the test bench and on ROD prototypes, are reported. Recent Pixel collaboration efforts focus on finalizing hardware and firmware tests for the IBL. The plan is to approach a complete IBL DAQ hardware-software installation by the end of 2014
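
    Purely as an illustration of the data-formatting stage mentioned above (the real datapath is FPGA firmware, not C++; the word layouts below are invented), wrapping frontend hits with a header and an error-carrying trailer could look like:

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Hypothetical sketch: frontend hit words are wrapped with a header and
        // a trailer carrying per-event error flags.
        std::vector<uint32_t> formatFragment(uint32_t l1id,
                                             const std::vector<uint32_t>& hits,
                                             uint32_t errorFlags) {
            std::vector<uint32_t> out;
            out.push_back(0xB0F00000u | (l1id & 0xFFFFFu));    // header word
            out.insert(out.end(), hits.begin(), hits.end());   // hit payload
            out.push_back(0xE0F00000u | (errorFlags & 0xFFu)); // trailer + errors
            return out;
        }

        int main() {
            auto frag = formatFragment(42, {0x1234, 0x5678}, 0x1);
            std::printf("fragment of %zu words\n", frag.size());
        }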

  8. Data-flow Performance Optimisation on Unreliable Networks: the ATLAS Data-Acquisition Case

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2015-01-01

    The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several 10 GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet. ...

  9. Data-flow performance optimization on unreliable networks: the ATLAS data-acquisition case

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2014-01-01

    The ATLAS detector at CERN records proton-proton collisions delivered by the Large Hadron Collider (LHC). The ATLAS Trigger and Data-Acquisition (TDAQ) system identifies, selects, and stores interesting collision data. These are received from the detector readout electronics at an average rate of 100 kHz. The typical event data size is 1 to 2 MB. Overall, the ATLAS TDAQ can be seen as a distributed software system executed on a farm of roughly 2000 commodity PCs. The worker nodes are interconnected by an Ethernet network that at the restart of the LHC in 2015 is expected to experience a sustained throughput of several 10 GB/s. A particular type of challenge posed by this system, and by DAQ systems in general, is the inherently bursty nature of the data traffic from the readout buffers to the worker nodes. This can cause instantaneous network congestion and therefore performance degradation. The effect is particularly pronounced for unreliable network interconnections, such as Ethernet. In this presentation we...
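
    A toy sketch of one mitigation for such bursty, incast-prone traffic (illustrative only; not the ATLAS TDAQ code, and all numbers are invented): cap the number of fragment requests in flight so the readout network is never flooded by a burst:

        #include <cstdio>

        int main() {
            const int nFragments = 100;   // fragments needed for one event
            const int credits = 8;        // max requests in flight (assumed)
            int inFlight = 0, sent = 0, received = 0;
            while (received < nFragments) {
                while (sent < nFragments && inFlight < credits) {
                    ++sent;
                    ++inFlight;           // a fragment request would be issued here
                }
                ++received;               // a reply would be awaited here
                --inFlight;
            }
            std::printf("collected %d fragments with <= %d in flight\n",
                        received, credits);
        }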

  10. Firmware development and testing of the ATLAS Pixel Detector / IBL ROD card

    CERN Document Server

    Gabrielli, Alessandro; The ATLAS collaboration; Balbi, Gabriele; Bindi, Marcello; Chen, Shaw-pin; Falchieri, Davide; Flick, Tobias; Hauck, Scott Alan; Hsu, Shih-Chieh; Kretz, Moritz; Kugel, Andreas; Lama, Luca; Travaglini, Riccardo; Wensing, Marius; ATLAS Pixel Collaboration

    2015-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shut down. In particular, the Pixel detector has gained an additional inner layer called the Insertable B-Layer (IBL). The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential front-end data path of the IBL’s off-detector DAQ system. The strategy for IBL ROD firmware development was three-fold: keeping as much of the Pixel ROD datapath firmware logic as possible, employing a completely new scheme of steering and calibration firmware, and designing the overall system to prepare for a future unified code version integrating the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data pat...

  11. Flexible custom designs for CMS DAQ

    CERN Document Server

    Arcidiacono, Roberta; Boyer, Vincent; Brett, Angela Mary; Cano, Eric; Carboni, Andrea; Ciganek, Marek; Cittolin, Sergio; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Gulmini, Michele; Gutleber, Johannes; Jacobs, Claude; Maron, Gaetano; Meijers, Frans; Meschi, Emilio; Murray, Steven John; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Piedra Gomez, Jonatan; Pieri, Marco; Pollet, Lucien; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Sumorok, Konstanty; Suzuki, Ichiro; Tsirigkas, Dimitrios; Varela, Joao

    2006-01-01

    The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector specific front-end electronics to the central DAQ system in a uniform way. The FRL is a compact-PCI module with an additional PCI 64bit connector to host a Network Interface Card (NIC). On the sub-detector side, the data are written to the link using a FIFO-like protocol (SLINK64). The link uses the Low Voltage Differential Signal (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them and provide the resulting signals with low latency to the first level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-end exceeds the c...
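
    A sketch of the merging idea implemented by an FMM-like module (illustrative encoding; the real FMM is hardware and the actual status-signal encoding is not reproduced here): the state presented to the trigger is the most restrictive state among the inputs:

        #include <algorithm>
        #include <array>
        #include <cstdio>

        // Ordered so that a larger value means a more restrictive state.
        enum State { READY = 0, WARNING = 1, BUSY = 2, ERROR = 3 };

        // The merged output is the worst state of all inputs.
        State merge(const std::array<State, 4>& inputs) {
            return *std::max_element(inputs.begin(), inputs.end());
        }

        int main() {
            const std::array<State, 4> feds{READY, WARNING, READY, BUSY};
            std::printf("merged state = %d (2 = BUSY)\n", merge(feds));
        }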

  12. The 40 MHz trigger-less DAQ for the LHCb Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Campora Perez, D.H. [INFN CNAF, Bologna (Italy); Falabella, A., E-mail: antonio.falabella@cnaf.infn.it [CERN, Geneva (Switzerland); Galli, D. [INFN Sezione di Bologna, Bologna (Italy); Università Bologna, Bologna (Italy); Giacomini, F. [CERN, Geneva (Switzerland); Gligorov, V. [INFN CNAF, Bologna (Italy); Manzali, M. [Università Bologna, Bologna (Italy); Università Ferrara, Ferrara (Italy); Marconi, U. [INFN Sezione di Bologna, Bologna (Italy); Neufeld, N.; Otto, A. [INFN CNAF, Bologna (Italy); Pisani, F. [INFN CNAF, Bologna (Italy); Università la Sapienza, Roma (Italy); Vagnoni, V.M. [INFN Sezione di Bologna, Bologna (Italy)

    2016-07-11

    The LHCb experiment will undergo a major upgrade during the second long shutdown (2018–2019), aiming to let LHCb collect an order of magnitude more data with respect to Run 1 and Run 2. The maximum readout rate of 1 MHz is the main limitation of the present LHCb trigger. The upgraded detector, apart from major detector upgrades, foresees a full read-out running at the LHC bunch crossing frequency of 40 MHz, using an entirely software-based trigger. A new high-throughput PCIe Generation 3 based read-out board, named PCIe40, has been designed for this purpose. The read-out board will allow an efficient and cost-effective implementation of the DAQ system by means of high-speed PC networks. The network-based DAQ system reads data fragments, performs the event building, and transports events to the High-Level Trigger at an estimated aggregate rate of about 32 Tbit/s. Different architectures can be implemented for the DAQ, such as push, pull and traffic shaping with a barrel-shifter. Possible technology candidates for the foreseen event-builder under study are InfiniBand and Gigabit Ethernet. In order to define the best implementation of the event-builder, we are performing tests of the event-builder on different platforms with different technologies. For testing we use an event-builder evaluator, a flexible software implementation to be used on small test beds as well as on HPC-scale facilities. The architecture of the DAQ system and up-to-date performance results will be presented.
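
    The barrel-shifter traffic shaping named above can be illustrated with a toy schedule (not LHCb code; equal numbers of sources and destinations are assumed and N is invented): in time slot t, source s sends its fragment to destination (s + t) mod N, so every destination receives exactly one fragment per slot and the switch is never oversubscribed:

        #include <cstdio>

        int main() {
            const int N = 4;  // readout sources == builder nodes, assumed equal
            for (int t = 0; t < N; ++t) {
                std::printf("slot %d:", t);
                for (int s = 0; s < N; ++s)
                    std::printf("  src%d->dst%d", s, (s + t) % N);
                std::printf("\n");
            }
        }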

  13. Firmware development and testing of the ATLAS IBL Read-Out Driver card

    CERN Document Server

    Chen, S-P; The ATLAS collaboration; Falchieri, D; Gabrielli, A; Hauck, S; Hsu, S-C; Kretz, M; Kugel, A; Travaglini, R; Wensing, M

    2014-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, the Pixel detector is inserting an additional inner layer called the Insertable B-Layer (IBL). The Read-Out Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential front-end data path of the IBL’s off-detector DAQ system. The strategy for IBL ROD firmware development focused on migrating and tailoring HDL code blocks from the Pixel ROD to ensure modular compatibility in future ROD upgrades, in which a unified code version will interface with the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation, tested on the testbench and on ROD prototypes, will be ...

  14. ATLAS TRT Barrel in Test Beam

    CERN Multimedia

    Luehring, F

    In July, the TRT group made a highly successful test of 6 Barrel TRT modules in the ATLAS H8 testbeam. Over 3000 TRT straw tubes (4 mm diameter gas drift tubes) were instrumented and found to operate well. The prototype represents 1/16 of the ATLAS TRT barrel and was assembled from TRT modules produced as spares. This was the largest-scale test of the TRT to date, and the measured detector performance was as good as or better than expected in all cases. [Photo: The 2004 TRT testbeam setup before final cabling was attached.] The readout chain and central DAQ system used in the TRT testbeam are a final prototype for the ATLAS experiment. The TRT electronics used to read out the data were: the Amplifier/Shaper/Discriminator with Baseline Restoration (ASDBLR) chip, the front-end analog chip that shapes and discriminates the electronic pulses generated by the TRT straws; and the Digital Time Measurement Read Out Chip (DTMROC), which measures the time of the pulse relative to the beam crossing time. The TRT-ROD ...

  15. LAND/R3B DAQ developments

    Energy Technology Data Exchange (ETDEWEB)

    Toernqvist, Hans; Aumann, Thomas; Loeher, Bastian [Technische Universitaet Darmstadt, Darmstadt (Germany); Simon, Haik [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Johansson, Haakan [Chalmers Institute of Technology, Goeteborg (Sweden); Collaboration: R3B-Collaboration

    2015-07-01

    Existing experimental setups aim to exploit most of the improved capabilities and specifications of the upcoming FAIR facility at GSI, and their DAQ designs will require some re-evaluation and upgrades. This presentation summarizes the R3B experimental campaigns in 2014, during which the R3B DAQ was subject to testing of several new features that will aid researchers in running larger and more complicated experimental setups in the future. It also acted as a small testing ground for the NUSTAR DAQ infrastructure. In order to allow correlations between several experimental sites to be extracted, newly suggested triggering and timestamping implementations were tested over significant distances. Also, with growing experimental complexity comes a greater risk of problems that may be difficult to characterize and solve; to this end, essential remote monitoring and debugging tools have been used successfully.

  16. The DAQ system for the AEḡIS experiment

    Science.gov (United States)

    Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.

    2017-10-01

    In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes felt as a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems, and as such it may be hard to complete with off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: starting from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment are cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup, which has been able to cater to all run data logging and long-term monitoring needs of the AEḡIS experiment so far.
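
    A minimal sketch of the ROOT side of such a scheme (branch names, values and the file name are hypothetical; the actual AEḡIS type mapping from GXML is richer):

        #include "TFile.h"
        #include "TTree.h"

        // Values arriving from LabVIEW are appended to a TTree for permanent
        // storage and later analysis.
        int main() {
            TFile f("aegis_monitor.root", "RECREATE");
            TTree tree("mon", "run monitoring data");
            int runNumber = 0;
            double pressure = 0.0;
            tree.Branch("runNumber", &runNumber, "runNumber/I");
            tree.Branch("pressure", &pressure, "pressure/D");
            for (runNumber = 1; runNumber <= 3; ++runNumber) {
                pressure = 1e-9 * runNumber;  // stand-in for a decoded GXML value
                tree.Fill();
            }
            tree.Write();
            f.Close();
            return 0;
        }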

  17. LHCb DAQ network upgrade tests

    CERN Document Server

    Pisani, Flavio

    2013-01-01

    My project concerned the evaluation of new technologies for the DAQ network upgrade of LHCb. The first part consisted in developing an OpenFlow-based Clos network. This new technology is very interesting and powerful but, as shown by the results, it still needs further improvements. The second part consisted in testing and benchmarking 40GbE network equipment: Mellanox MT27500, Chelsio T580 and Huawei Cloud Engine 12804. An event-building simulation is currently being performed in order to check the feasibility of the DAQ network upgrade in LS2. The first results are promising.

  18. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, George Victor

    2010-10-27

    The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real-time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, over which configuration data is loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", a Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to meet the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  19. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    International Nuclear Information System (INIS)

    Andrei, George Victor

    2010-01-01

    The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real-time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, over which configuration data is loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", a Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to meet the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  1. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, the issues we met during this project, including the integration of the database into an existing distributed environment, and factors which influence performance. (author)

  2. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level 1 rate of 75 kHz to a few kHz event-building rate after Level 2 and a few hundred Hz output rate to disk. It has operated with an average data-taking efficiency of about 94% during the recent years. The performance has far exceeded the initial requirements, with about 5 kHz event-building rate and 500 Hz of output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and to improve the performance. On the network side, new core switches will be deployed, and possible use of 10 Gbit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high level trigger system foresees a merging of the Level 2 and Event Filter functionality on a single node, including the event building. This will represent a big simplification of the existing system, while ...
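
    The quoted rates imply the following rejection factors (simple arithmetic on the 2012 numbers above):

        \frac{75\ \mathrm{kHz}}{5\ \mathrm{kHz}} = 15 \ \text{(Level 2 + event building)},
        \qquad
        \frac{5\ \mathrm{kHz}}{500\ \mathrm{Hz}} = 10 \ \text{(Event Filter)},
        \qquad
        \frac{75\ \mathrm{kHz}}{500\ \mathrm{Hz}} = 150 \ \text{(overall)}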

  3. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Attila Racz

    DAQ/On-Line Computing installation status After the installation and commissioning of the DAQ underground elements in 2006 and the first months of 2007, all the efforts are now directed to the installation and commissioning of the On-Line Computing farm (OLC) located on the first floor of SCX5 building at the CMS experimental site. In summer 2007, 640 Readout Unit servers (RUs) have been installed and commissioned along with 160 servers providing general services for the users (DCS, database, RCMS, data storage, etc). Since the global run of November 2007, the event fragments are assembled and processed by the OLC. Thanks to the flexibility of the trapezoidal event builder, some RUs are acting as Filter Units (FUs) and hence provide the full processing chain with a single type of server. With this temporary configuration, all FEDs can be readout at a few kHz. Since the March 08 global run, events are stored on the storage manager SAN in the OLC, and subsequently transferred over the dedicated CDR link (2 x...

  4. DAQ

    CERN Multimedia

    P. Schieferdecker

    ConfDB: CMS HLT Configuration Database The CMS High Level Trigger (HLT) is based on the CMSSW reconstruction framework and is therefore configured in much the same way as any offline or analysis job: by passing a document, valid according to the CMSSW configuration grammar, to the internal event processing machinery. For offline reconstruction or analysis, this document can be formatted as a text file or a Python script, from which CMSSW determines which specific software modules to load, which value to assign to each of their parameters, and in which succession to apply them to a given event. The configuration of the HLT is very complex: saving the most recent version of it into a single text file results in more than 8000 lines of instructions, amounting to more than 350 kB in size. As for any other subsystem of the CMS data acquisition system (DAQ), the record of the state of the HLT during data-taking must be meticulously kept and archived. It is crucial that several versions of a part...

  5. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Gerry Bauer

    The CMS Storage Manager System The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with steadily evolving software within the XDAQ framework, but until relatively recently only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007, and lives on in 2008 for dedicated sub-detector commissioning. Since March of 2008 a first phase of the final hardware was commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array - a device housing up to 40 disks of 1TB each, and possessing two controllers each capable of almost 200 MB/sec throughput....

  6. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously dispersed functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility under different running conditions is improved by an automatic balancing of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...

  7. Development of fluorocarbon evaporative cooling recirculators and controls for the ATLAS inner silicon tracker

    CERN Document Server

    Bayer, C; Bonneau, P; Bosteels, Michel; Burckhart, H J; Cragg, D; English, R; Hallewell, G D; Hallgren, Björn I; Ilie, S; Kersten, S; Kind, P; Langedrag, K; Lindsay, S; Merkel, M; Stapnes, Steinar; Thadome, J; Vacek, V

    2000-01-01

    We report on the development of evaporative fluorocarbon cooling recirculators and their control systems for the ATLAS inner silicon tracker. We have developed a prototype circulator using a dry, hermetic compressor with C₃F₈ refrigerant, and have prototyped the remote-control analog pneumatic links for the regulation of coolant mass flows and operating temperatures that will be necessary in the magnetic field and radiation environment around ATLAS. Pressure and flow measurement and control use 150+ channels of standard ATLAS LMB ("Local Monitor Board") DAQ and DACs on a multi-drop CAN network administered through a BridgeVIEW user interface. A hardwired thermal interlock system has been developed to cut power to individual silicon modules should their temperatures exceed safe values. Highly satisfactory performance of the circulator under steady-state, partial-load and transient conditions was seen, with proportional fluid flow tuned to the varying circuit power. Future developments, including a 6 kW...

  8. Operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The experience accumulated in the operation of the ATLAS DAQ/HLT system during these years has stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), the Event Builder (EB), and the Event Filter (EF) - into a single homogeneous one in which each HLT node executes all the steps required by the trigger and data acquisition process. Each L1 event is assigned to an available HLT node, which executes the L2 algorithms using a subset of the event data and, upon positive selection, builds the event, which is further processed by the EF algorithms. Appealing aspects of this design are: a simplification of the software architecture and of its configuration, a better exploitation of the computing resources, the caching of fragments already collected for L2 processing, the automated load balancing between L2 and EF selection steps, and the sharing of code and services on HLT nodes. Furthermore, the full treatmen...

  9. Quality of service on Linux for the Atlas TDAQ event building network

    International Nuclear Information System (INIS)

    Yasu, Y.; Manabe, A.; Fujii, H.; Watase, Y.; Nagasaka, Y.; Hasegawa, Y.; Shimojima, M.; Nomachi, M.

    2001-01-01

    Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching network technologies. Quality of Service (QoS) is a technique for congestion control. Recent Linux releases provide QoS in the kernel to manage network traffic. The authors have analyzed the packet loss and packet distribution for the event builder prototype of the Atlas TDAQ system, using PC/Linux with a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfers, while best-effort UDP/IP transfers suffered heavy packet loss. The results also showed that the QoS overhead was small. The authors conclude that QoS on Linux performs efficiently for TCP/IP and UDP/IP and will have an important role in the Atlas TDAQ system
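
    On the application side, a Linux socket can be tagged so that the kernel's queueing disciplines (such as the CBQ and TBF configurations studied above) can classify its traffic; a minimal sketch (the qdisc setup itself is done in the kernel, outside the application, and the priority value is chosen for illustration):

        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock < 0) { std::perror("socket"); return 1; }
            int prio = 6;  // priority band, illustrative value
            if (setsockopt(sock, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
                std::perror("setsockopt(SO_PRIORITY)");
            else
                std::printf("socket priority set to %d\n", prio);
            close(sock);
        }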

  10. H4DAQ: a modern and versatile data-acquisition package for calorimeter prototypes test-beams

    Science.gov (United States)

    Marini, A. C.

    2018-02-01

    The upgrade of the particle detectors for the HL-LHC or for future colliders requires an extensive program of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system, H4DAQ, was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and it has since been adopted in various applications for the CMS experiment and AIDA project. Several calorimeter prototypes and precision timing detectors have used our system from 2014 to 2017. H4DAQ has proven to be a versatile application and has been ported to many other beam test environments. H4DAQ is fast, simple, modular and can be configured to support various kinds of setup. The functionalities of the DAQ core software are split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data between the various computers is performed using ZEROMQ (0MQ) sockets. Plugins are available to read different types of hardware, including VME crates with many types of boards, PADE boards, custom front-end boards and beam instrumentation devices. The raw data are saved as ROOT files, using the CERN C++ ROOT libraries. A Graphical User Interface, based on the python gtk libraries, is used to operate the H4DAQ and an integrated data quality monitoring (DQM), written in C++, allows for fast processing of the events for quick feedback to the user. As the 0MQ libraries are also available for the National Instruments LabVIEW program, this environment can easily be integrated within H4DAQ applications.
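
    A minimal sketch of the 0MQ distribution pattern described above (endpoint and payload are invented; the real H4DAQ state machines are more elaborate): a publisher distributes a status string that any subscriber on the LAN can receive:

        #include <zmq.h>
        #include <chrono>
        #include <cstdio>
        #include <cstring>
        #include <thread>

        // Build with -lzmq.
        int main() {
            void* ctx = zmq_ctx_new();
            void* pub = zmq_socket(ctx, ZMQ_PUB);
            zmq_bind(pub, "tcp://*:5555");

            void* sub = zmq_socket(ctx, ZMQ_SUB);
            zmq_connect(sub, "tcp://localhost:5555");
            zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);  // subscribe to everything

            // Give the subscription time to propagate (the 0MQ "slow joiner").
            std::this_thread::sleep_for(std::chrono::milliseconds(200));

            const char* msg = "RUNCONTROL RUNNING";
            zmq_send(pub, msg, std::strlen(msg), 0);

            char buf[64] = {0};
            if (zmq_recv(sub, buf, sizeof(buf) - 1, 0) >= 0)
                std::printf("received: %s\n", buf);

            zmq_close(sub);
            zmq_close(pub);
            zmq_ctx_term(ctx);
        }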

  11. The Phase-2 ATLAS ITk Pixel Upgrade

    CERN Document Server

    Macchiolo, Anna; The ATLAS collaboration

    2018-01-01

    The new ATLAS ITk pixel system will be installed during the LHC Phase-II shutdown, to take full advantage of the increased luminosity of the HL-LHC. The detector will consist of 5 layers of stave-like support structures in the most central region and ring-shaped supports in the endcap regions, covering up to |η| < 4. While the outer 3 layers of the Pixel Detector are designed to operate for the full HL-LHC data-taking period, the innermost 2 layers of the detector will be replaced around the midpoint of its lifetime. The ITk pixel detector will be instrumented with new sensors and readout electronics to provide improved tracking performance and radiation hardness compared to the current detector. Sensors will be read out by new ASICs based on the chip developed by the RD53 Collaboration. The pixel off-detector readout electronics will be implemented in the framework of the general ATLAS trigger and DAQ system, with a readout speed of up to 5 Gb/s per data link for the innermost layers. Results of extensive tests...

  12. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  13. Towards a Level-1 tracking trigger for the ATLAS experiment

    CERN Document Server

    Cerri, A; The ATLAS collaboration

    2014-01-01

    The future plans for the LHC accelerator allow, through a schedule of phased upgrades, an increase in the average instantaneous luminosity by a factor 5 with respect to the original design luminosity. The ATLAS experiment at the LHC will be able to maximise the physics potential from this higher luminosity only if the detector, trigger and DAQ infrastructure are adapted to handle the sustained increase in particle production rates. In this paper, the changes expected to be required to the ATLAS detectors and trigger system to fulfil the requirements of such a high-luminosity scenario are described. The increased number of interactions per bunch crossing will result in higher occupancy in the detectors and increased rates at each level of the trigger system. The trigger selection will gain selectivity, partly from the increased granularity of the sub-detectors and the consequent higher resolution. One of the largest challenges will be the provision of tracking information at the first trigger level...

  14. DAQ

    CERN Multimedia

    Frans Meijers

    2012-01-01

    Operations for the 2012 physics run For the 2012 run, the DAQ system operates typically at the start of a fill with a L1 Trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1 kHz recording of stream-A with a size of ~450 kB after compression. The stream-A includes the physics triggers and consists since 2012 of the “core” triggers and the “parked” triggers, at about equal rate. In order to be able to handle the higher instantaneous luminosities in 2012 (so far, up to 6.5E33 at 50 ns bunch spacing) with a pile-up of ~35 events, an extension of the HLT was installed, commissioned and is in operation since the start of data taking. Extension of the HLT farm The CMS event builder and High-Level Trigger (HLT) farm are built using standard commercial PCs and networking equipment and are therefore easily extendable with state-of-the-art hardware. The HLT farm has been extended twice so far, in May 2011 and recently in May 2012. Table 1 shows the parameters and...

  15. Applications of an OO (Object Oriented) methodology and CASE to a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and we relate our experiences with the method and tool, its integration into our development environment and the spiral life cycle it supports. (author)

  16. A Readout Driver for the ATLAS LAr Calorimeter at a High Luminosity LHC

    CERN Document Server

    Kielburg-Jeka, A; The ATLAS collaboration

    2010-01-01

    A new readout driver (ROD) is being developed as a central part of the signal processing of the ATLAS liquid-argon calorimeters for operation at the sLHC. In the architecture of the upgraded readout system, the ROD modules will have several challenging tasks: receiving up to 1.4 Tb/s of data per board from the detector front-end on multiple high-speed serial links, low-latency data processing, data buffering, and data transmission to the ATLAS trigger and DAQ systems. In order to evaluate the different components, prototype boards in ATCA format equipped with modern Xilinx and Altera FPGAs have been built. We will report on the measured performance of the SERDES devices, the parallel signal processing using DSP slices, the implementation of trigger interfaces, using e.g. multi-Gb Ethernet, as well as the development of the ATCA infrastructure on the ROD prototype modules.

  17. A Readout Driver for the ATLAS LAr Calorimeter at a High Luminosity LHC

    CERN Document Server

    Kielburg-Jeka, A

    2011-01-01

    A new readout driver (ROD) is being developed as a central part of the signal processing of the ATLAS liquid-argon calorimeters for operation at the High Luminosity LHC (HL-LHC). In the architecture of the upgraded readout system, the ROD modules will have several challenging tasks: receiving up to 1.4 Tb/s of data per board from the detector front-end on multiple high-speed serial links, low-latency data processing, data buffering, and data transmission to the ATLAS trigger and DAQ systems. In order to evaluate the different components, prototype boards in ATCA format equipped with modern Xilinx and Altera FPGAs have been built. We will report on the measured performance of the SERDES devices, the parallel signal processing using DSP slices, the implementation of trigger interfaces, using e.g. multi-Gb Ethernet, as well as the development of the ATCA infrastructure on the ROD prototype modules.

  18. Clock Distribution and Readout Architecture for the ATLAS Tile Calorimeter at the HL-LHC

    CERN Document Server

    Carrio Argos, Fernando; The ATLAS collaboration

    2018-01-01

    The Tile Calorimeter (TileCal) is one detector of the ATLAS experiment at the Large Hadron Collider (LHC). TileCal is a sampling calorimeter made of steel plates and plastic scintillators, read out using approximately 10,000 photomultiplier tubes (PMTs). In 2024, the LHC will undergo a series of upgrades towards a High Luminosity LHC (HL-LHC) to deliver up to 7.5 times the current nominal instantaneous luminosity. The ATLAS Tile Phase II Upgrade will adapt the detector and data acquisition system to the HL-LHC requirements. The detector electronics will be redesigned using a new clock distribution and readout architecture with a fully digital trigger system. After the Long Shutdown 3 (2024-2026), the on-detector electronics will transfer digitized data for every bunch crossing (~25 ns) to the Tile PreProcessors (TilePPr) in the counting rooms, with a total data bandwidth of 40 Tbps. The TilePPrs will store the detector data in pipeline memories to cope with the new ATLAS DAQ architecture requirements...
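
    The pipeline memory mentioned above can be illustrated with a toy ring buffer (depth and latency values are invented; the real TilePPr implementation is firmware): a sample is written every bunch crossing, and on a trigger accept the sample from a fixed number of crossings ago is read back:

        #include <array>
        #include <cstdint>
        #include <cstdio>

        template <std::size_t Depth>
        class Pipeline {
            std::array<uint16_t, Depth> mem{};
            std::size_t wr = 0;
        public:
            // Store one sample per bunch crossing.
            void write(uint16_t sample) { mem[wr] = sample; wr = (wr + 1) % Depth; }
            // Retrieve the sample written `latency` crossings before the latest.
            uint16_t readBack(std::size_t latency) const {
                return mem[(wr + Depth - 1 - latency) % Depth];
            }
        };

        int main() {
            Pipeline<256> pipe;
            for (uint16_t bc = 0; bc < 100; ++bc)
                pipe.write(bc);                       // one sample per crossing
            std::printf("sample 40 crossings ago: %u\n",
                        static_cast<unsigned>(pipe.readBack(40)));  // prints 59
        }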

  19. ATLAS Data Preparation in Run 2

    CERN Document Server

    Laycock, Paul; The ATLAS collaboration

    2016-01-01

    In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrives, allowing a first look at the offline data quality within hours of a run end. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48 hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...

  20. A modern and versatile data-acquisition package for calorimeter prototypes test-beams H4DAQ

    CERN Document Server

    Marini, Andrea Carlo

    2017-01-01

    The upgrade of the calorimeters for the HL-LHC or for future colliders requires an extensive programme of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system (called H4DAQ) was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and has since been adopted by an increasing number of teams involved in the CMS experiment and AIDA groups. Several different calorimeter prototypes and precision timing detectors have used H4DAQ from 2014 to 2017, and it has proved to be a versatile application, portable to many other beam test environments (the CERN beam lines EA-T9 at the PS, H2 and H4 at the SPS, and the INFN Frascati Beam Test Facility). The H4DAQ is fast, simple, modular and can be configured to support different setups. The functionalities of the DAQ core software are split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data betw...

  1. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics programme approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
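
    A sketch of channel zero-suppression of the kind named above (pedestal, threshold and data values are invented, and the real implementation is SRS firmware; C++ is used purely for illustration):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Only samples above pedestal + threshold are kept, together with
        // their channel index.
        struct Hit { uint16_t channel; uint16_t adc; };

        std::vector<Hit> zeroSuppress(const std::vector<uint16_t>& adc,
                                      uint16_t pedestal, uint16_t threshold) {
            std::vector<Hit> hits;
            for (std::size_t ch = 0; ch < adc.size(); ++ch)
                if (adc[ch] > pedestal + threshold)
                    hits.push_back({static_cast<uint16_t>(ch),
                                    static_cast<uint16_t>(adc[ch] - pedestal)});
            return hits;
        }

        int main() {
            std::vector<uint16_t> frame{100, 102, 180, 99, 250, 101};
            for (auto h : zeroSuppress(frame, 100, 20))
                std::printf("ch %u: %u ADC above pedestal\n",
                            static_cast<unsigned>(h.channel),
                            static_cast<unsigned>(h.adc));
        }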

  2. Contributions to large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. The Online Software is responsible for control, supervision and internal communication, excluding the event data flow. For the final ATLAS experiment in 2006 it is expected to control up to 1000 processors. The core components are the run control, process manager, configuration database, inter-process communication, message reporting system and information exchange system. The auxiliary components, namely the resource manager, online bookkeeper and the integrated graphical user interface, were also used in the tests. All components are unit-tested for functionality, fault tolerance, performance and scalability. Extended functionality tests are performed at CERN and remote institutes before each official release. The test objective was to verify the scalability of the system to a configuration containing a large number of nodes. The aim was to study the interaction between the components, to identify critical areas and to investigate the variation and optimization of online system parameters. The timing of the data acquisition transition phases was recorded and analysed. The information on all processes and their relationships, the run control hierarchy in the online system, as well as startup and shutdown dependencies, are defined in the configuration database data file. Timing measurements were performed for the transitions shown in the paper and defined as follows: Setup: start online server infrastructure; Close: remove online infrastructure; Boot: start all supervised processes; Shutdown: stop all supervised processes; Cold start: start the supervised processes and go to the Running state; Cold stop: reverse of the cold start phase; Luke warm start
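
    The transition-timing measurements described above amount to stop-watching each DAQ phase; a minimal harness in that spirit is sketched below, with sleeps standing in for the real phase actions.

    ```python
    import time

    # Stopwatch harness for DAQ transition phases (setup, boot, cold start, ...).
    # The phase bodies are stand-ins; real tests drive the actual processes.
    def timed(phase_name, action):
        t0 = time.perf_counter()
        action()
        dt = time.perf_counter() - t0
        print(f"{phase_name:>10}: {dt * 1000:.1f} ms")
        return dt

    timed("setup", lambda: time.sleep(0.05))       # start online infrastructure
    timed("boot", lambda: time.sleep(0.10))        # start all supervised processes
    timed("cold start", lambda: time.sleep(0.15))  # go to the Running state
    ```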

  3. Design and Commissioning of the ATLAS Muon Spectrometer RPC Read Out Driver

    CERN Document Server

    Aloisio, A; Cevenini, F; Della Pietra; Della Volpe; Izzo, V

    2008-01-01

    The RPC subsystem of the ATLAS muon spectrometer provides the Level-1 trigger in the barrel and it is read out by a specific DAQ system. On-detector electronics pack the RPC data in frames, tagged with an event number assigned by the trigger logic, and transmit them to the counting room on optical fibre. Data from each sector are then routed together to a Read-Out Driver (ROD) board. This is a custom processor that parses the frames, checks their coherence and builds a data structure for all the RPCs of one of the 32 sectors of the spectrometer. Each ROD sends the event fragments to a Read-Out subsystem for further event building and analysis. The ROD is a VME64x board, designed around two Xilinx Virtex-II FPGAs and an ARM7 microcontroller. In this paper we describe the board architecture and the event binding algorithm. The boards have been installed in the ATLAS USA15 control room and have been successfully used in the ATLAS commissioning runs.

  4. Commissioning and integration testing of the DAQ system for the CMS GEM upgrade

    CERN Document Server

    Castaneda Hernandez, Alfredo Martin

    2017-01-01

    The CMS muon system will undergo a series of upgrades in the coming years to preserve and extend its muon detection capabilities during the High Luminosity LHC. The first of these will be the installation of triple-foil GEM detectors in the CMS forward region, with the goal of maintaining trigger rates and preserving good muon reconstruction, even in the expected harsh environment. In 2017 the CMS GEM project is looking to achieve a major milestone with the installation of 5 super-chambers in CMS; this exercise will allow for the study of services installation and commissioning, and integration with the rest of the subsystems for the first time. An overview of the DAQ system will be given, with emphasis on its usage during chamber quality control testing, commissioning in CMS, and integration with the central CMS system.

  5. A potent approach for the development of FPGA based DAQ system for HEP experiments

    Science.gov (United States)

    Khan, Shuaib Ahmad; Mitra, Jubin; David, Erno; Kiss, Tivadar; Nayak, Tapan Kumar

    2017-10-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been demand for robust Data Acquisition (DAQ) schemes which perform in a harsh radiation environment and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on a commercially available state-of-the-art FPGA with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics, situated in the harsh radiation environment, and the back-end Data Processing Unit (DPU) placed in a low-radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of the measurements of resource utilisation, critical path delays, signal integrity, eye diagram and Bit Error Rate (BER) are presented, which are the indicators for efficient system performance.
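
    As a side note on the BER measurement mentioned above, a standard rule of thumb bounds the bit error rate of a link that has run error-free; the sketch below computes the confidence upper limit under the usual Poisson assumption (textbook statistics, not the authors' procedure).

    ```python
    import math

    # Upper limit on BER after observing zero errors in n_bits transmitted bits,
    # assuming Poisson-distributed errors (the familiar ~3/N rule at 95% CL).
    def ber_upper_limit(n_bits, confidence=0.95):
        return -math.log(1.0 - confidence) / n_bits

    # e.g. one error-free hour on a 10.24 Gbps link:
    n = int(10.24e9 * 3600)
    print(f"BER < {ber_upper_limit(n):.2e} at 95% CL")
    ```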

  6. A potent approach for the development of FPGA based DAQ system for HEP experiments

    International Nuclear Information System (INIS)

    Khan, Shuaib Ahmad; Mitra, Jubin; Nayak, Tapan Kumar; David, Erno; Kiss, Tivadar

    2017-01-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been demand for robust Data Acquisition (DAQ) schemes which perform in a harsh radiation environment and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on a commercially available state-of-the-art FPGA with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics, situated in the harsh radiation environment, and the back-end Data Processing Unit (DPU) placed in a low-radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high-speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of the measurements of resource utilisation, critical path delays, signal integrity, eye diagram and Bit Error Rate (BER) are presented, which are the indicators for efficient system performance.

  7. A read-out buffer prototype for ATLAS high level triggers

    CERN Document Server

    Calvet, D; Huet, M; Le Dû, P; Mandjavidze, I D; Mur, M

    2000-01-01

    Read-Out Buffers are critical components in the dataflow chain of the ATLAS Trigger/DAQ system. At up to 75 kHz, after each Level-1 trigger accept signal, these devices receive and store digitized data from groups of front-end electronic channels. Several Read-Out Buffers are grouped to form a Read-Out Buffer Complex that acts as a data server for the High Level Trigger selection algorithms and for the final data collection system. This paper describes a functional prototype of a Read-Out Buffer based on a custom-made PCI mezzanine card that is designed to accept input data at up to 160 MB/s, to store up to 8 MB of data and to distribute data chunks at the desired request rate. We describe the hardware of the card, which is based on an Intel i960 processor and CPLDs. We present the integration of several of these cards in a Read-Out Buffer Complex. We measure various performance figures and discuss to what extent these can fulfill ATLAS needs.
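
    As a rough illustration of the Read-Out Buffer role described above (store fragments per event, serve them on request, free them on delete), here is a toy in-memory sketch; the class and method names are assumptions, not the prototype's firmware or API.

    ```python
    # Toy Read-Out Buffer: stores event fragments, serves them on request,
    # and frees them on an explicit delete -- API names are invented.
    class ReadOutBuffer:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.fragments = {}

        def store(self, event_id, data):
            if self.used + len(data) > self.capacity:
                raise MemoryError("ROB full: backpressure the front-end")
            self.fragments[event_id] = data
            self.used += len(data)

        def request(self, event_id):
            return self.fragments[event_id]   # served to HLT / event builder

        def delete(self, event_id):
            self.used -= len(self.fragments.pop(event_id))

    rob = ReadOutBuffer(capacity_bytes=8 * 2**20)  # 8 MB, as in the prototype
    rob.store(42, b"\x00" * 1024)
    assert rob.request(42) == b"\x00" * 1024
    rob.delete(42)
    ```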

  8. Towards a Level-1 Tracking Trigger for the ATLAS Experiment

    CERN Document Server

    De Santo, A; The ATLAS collaboration

    2014-01-01

    Plans for a physics-driven upgrade of the LHC foresee staged increases of the accelerator's average instantaneous luminosity, of up to a factor of five compared to the original design. In order to cope with the sustained luminosity increase, and the resulting higher detector occupancy and particle interaction rates, the ATLAS experiment is planning phased upgrades of the trigger system and of the DAQ infrastructure. In the new conditions, maintaining an adequate signal acceptance for electro-weak processes will pose unprecedented challenges, as the default solution to cope with the higher rates would be to increase thresholds on the transverse momenta of physics objects (leptons, jets, etc). Therefore the possibility to apply fast processing at the first trigger level in order to use tracking information as early as possible in the trigger selection represents a most appealing opportunity, which can preserve the ATLAS trigger's selectivity without reducing its flexibility. Studies to explore the feasibility o...

  9. Development of the DAQ System of Triple-GEM Detectors for the CMS Muon Spectrometer Upgrade at LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00387583

    The Gas Electron Multiplier (GEM) upgrade project aims at improving the performance of the muon spectrometer of the Compact Muon Solenoid (CMS) experiment which will suffer from the increase in luminosity of the Large Hadron Collider (LHC). After a long technical stop in 2019-2020, the LHC will restart and run at a luminosity of 2 × 10^34 cm^-2 s^-1, twice its nominal value. This will in turn increase the rate of particles to which detectors in CMS will be exposed and affect their performance. The muon spectrometer in particular will suffer from a degraded detection efficiency due to the lack of redundancy in its most forward region. To solve this issue, the GEM collaboration proposes to instrument the first muon station with Triple-GEM detectors, a technology which has proven to be resistant to high fluxes of particles. Within the GEM collaboration, the Data Acquisition (DAQ) subgroup is in charge of the development of the electronics and software of the DAQ system of the detectors. This thesis presents th...

  10. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention both to individual subsections and to system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis. (authors)
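
    Of the response-time techniques listed above, aggregation and data caching are easy to illustrate; the sketch below caches a per-switch aggregate for a short time-to-live. The data model and numbers are invented.

    ```python
    import time

    # Aggregate per-link throughput into a per-switch figure and cache it
    # briefly so repeated UI queries do not rescan raw data (invented model).
    _cache = {}

    def switch_utilisation(switch, links, ttl=5.0):
        hit = _cache.get(switch)
        if hit and time.time() - hit[0] < ttl:
            return hit[1]                      # served from cache
        agg = sum(l["bytes_per_s"] for l in links[switch]) / len(links[switch])
        _cache[switch] = (time.time(), agg)
        return agg

    links = {"sw01": [{"bytes_per_s": 4.0e8}, {"bytes_per_s": 6.0e8}]}
    print(switch_utilisation("sw01", links))   # 5.0e8, then cached
    ```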

  11. A TTC to Data Acquisition interface for the ATLAS Tile Hadronic calorimeter at the LHC

    CERN Document Server

    Valero, Alberto; The ATLAS collaboration; Torres Pais, Jose Gabriel; Soret Medel, Jesús

    2017-01-01

    TileCal is the central tile hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. It is a sampling calorimeter where scintillating tiles are embedded in steel absorber plates. The tiles are read out using almost 10,000 photomultipliers which convert the light into an electrical signal. These signals are digitized and stored in pipeline memories in the front-end electronics. Upon the reception of a trigger signal, the PMT data are transferred to the Read-Out Drivers in the back-end electronics, which process the data and transmit them to the ATLAS Data AcQuisition (DAQ) system. The Timing, Trigger and Control (TTC) system is an optical network used to distribute the clock synchronized with the accelerator, the trigger signals and configuration commands to both the front-end and back-end electronics components. During physics operation, the TTC system is used to configure the electronics and to distribute trigger information used to synchronize the different parts of the ...
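
    The digitise-every-crossing, read-out-on-trigger pattern described above can be sketched with a fixed-depth buffer; the depth, latency and data layout below are invented for illustration and do not correspond to the actual TileCal electronics.

    ```python
    from collections import deque

    # Fixed-depth pipeline memory: one digitised sample per bunch crossing,
    # read out at a fixed trigger latency. Depth/latency values are invented.
    DEPTH = 256       # pipeline depth in bunch crossings
    LATENCY = 100     # trigger decision arrives this many crossings later

    pipeline = deque(maxlen=DEPTH)
    for bc in range(1000):
        pipeline.append((bc, f"adc_sample_{bc}"))
        if bc == 500:                  # pretend a trigger fires at crossing 500
            trig_bc = bc - LATENCY     # the crossing the trigger refers to
            sample = next(s for b, s in pipeline if b == trig_bc)
            print("readout:", trig_bc, sample)
    ```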

  12. The ATLAS experience and its relevance to the data acquisition of the BM@N experiment at the NICA complex

    International Nuclear Information System (INIS)

    Tomiwa, K G; Mellado, B; Slepnev, I; Bazylev, S

    2016-01-01

    The quest to understand the world around us has increased the size of high energy physics experiments and the processing rate of the data output from high energy experiments. The Large Hadron Collider is the largest experimental set-up known, with the ATLAS detector as one of the detectors built to record proton-proton collisions at about 10 PB/s (Petabit/s) around the LHC interaction point. With the Phase-II upgrade in 2022 this data output will increase by at least a factor of 10 compared to today due to the luminosity increase, which poses a serious challenge for the processing and storage of the data. The BM@N fixed-target experiment is also expected to have an event size of about 80,000 bytes per event, leading to a huge amount of data output to be processed in real time. Experimentalists handle these challenges by developing high-throughput electronics with the capability of processing and reducing big data to scientific data in real time. Among these high-throughput electronics are the Super Readout Driver (sROD) and the ARM-based processing unit (PU) developed for the ATLAS TileCal detector by the University of the Witwatersrand. The sROD is designed to process data from the Tile Calorimeter at 40 MHz. This work takes a look at the architecture of the data acquisition (DAQ) system of the BM@N detectors and the adaptation of the high-throughput systems to the last stage of the BM@N DAQ system. (paper)

  13. Measurement Of Neutron Radius In Lead By Parity Violating Scattering Flash ADC DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Zafar [Christopher Newport Univ., Newport News, VA (United States)]

    2012-06-01

    This dissertation reports on the experiment PREx, a parity violation experiment designed to measure the neutron radius in 208Pb. PREx was performed in Hall A of the Thomas Jefferson National Accelerator Facility from March 19th to June 21st. Longitudinally polarized electrons at an energy of 1 GeV were scattered at an angle of θ_lab = 5.8° from the lead target. The beam-corrected parity-violating counting rate asymmetry is A_corr = (594 ± 50(stat) ± 9(syst)) ppb at Q^2 = 0.009068 GeV^2. This dissertation also presents the details of the Flash ADC Data Acquisition (FADC DAQ) system for Moller polarimetry in Hall A of the Thomas Jefferson National Accelerator Facility. The Moller polarimeter measures the beam polarization to the high precision needed to meet the specification of PREx (the lead radius experiment). The FADC DAQ is part of the upgrade of Moller polarimetry to reduce the systematic error for PREx. The hardware setup and the results of the FADC DAQ analysis are presented.

  14. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    Grael, F F; Maidantchik, C; Évora, L H R A; Karam, K; Moraes, L O F; Cirilli, M; Nessi, M; Pommès, K

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, the authors' list, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems were built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
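
    The intermediate-layer idea described above, in which one interface hides the particularities of each database, is essentially the adapter pattern; below is a minimal sketch with invented backends, not the actual Glance code.

    ```python
    # Minimal adapter layer in the spirit described above: callers use one
    # interface; each backend hides its own technology. Backends are invented.
    class Backend:
        def fetch(self, query): raise NotImplementedError

    class SqlBackend(Backend):
        def fetch(self, query):
            return [{"source": "sql", "q": query}]   # stand-in for a real driver

    class LdapBackend(Backend):
        def fetch(self, query):
            return [{"source": "ldap", "q": query}]

    class Glance:
        def __init__(self, backends): self.backends = backends
        def search(self, name, query):
            return self.backends[name].fetch(query)

    g = Glance({"members": SqlBackend(), "accounts": LdapBackend()})
    print(g.search("members", "institute='CERN'"))
    ```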

  15. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, the authors' list, preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only to a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the experiment's long lifetime and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems were built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  16. Construction and Performance of the ATLAS SCT Barrels and Cosmic Tests

    CERN Document Server

    Demirkoz, Bilge Melahat

    2007-01-01

    ATLAS is a multi-purpose detector for the LHC and will detect proton-proton collisions with a center-of-mass energy of 14 TeV. Part of the central inner detector, the Semi-Conductor Tracker (SCT) barrels, were assembled and tested at Oxford University and later integrated at CERN with the TRT (Transition Radiation Tracker) barrel. The barrel SCT is composed of 4 layers of silicon strip modules with two sensor layers with 80 μm channel width. The design of the modules and the barrels has been optimized for low radiation length while maintaining mechanical stability, bringing services to the detector, and ensuring a cold and dry environment. The high granularity, high detector efficiency and low noise occupancy (< 5 × 10^-4) of the SCT will enable ATLAS to have an efficient pattern recognition capability. Due to the binary nature of the SCT read-out, a stable read-out system and the calibration system are of critical importance. SctRodDaq is the online software framework for the calibration and a...

  17. The Detector Control System of the ATLAS experiment at CERN An application to the calibration of the modules of the Tile Hadron Calorimeter

    CERN Document Server

    Varelá-Rodriguez, F

    2002-01-01

    The principal subject of this thesis work is the design and development of the Detector Control System (DCS) of the ATLAS experiment at CERN. The DCS must ensure the coherent and safe operation of the detector and handle the communication with external systems, such as the LHC accelerator and CERN services. A bidirectional data flow between the Data AcQuisition (DAQ) system and the DCS will enable coherent operation of the experiment. The LHC experiments represent new challenges for the design of the control system. The extremely high complexity of the project forces the design of different components of the detector and related systems to be performed well ahead of their use. The long lifetime of the LHC experiments imposes the use of evolving technologies and modular design. The overall dimensions of the detector and the high number of I/O channels call for a control system with processing power distributed all over the facilities of the experiment while keeping a low cost. The environmental conditions require...

  18. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze the data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring, optimised for use with huge amounts of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
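
    The dynamic job definition mentioned above can be illustrated by a toy partitioner that groups input files into jobs under a size cap; the cap and data layout are invented, and the real system also weighs memory and CPU estimates.

    ```python
    # Toy task -> jobs partitioner: group input files until a size cap is hit.
    # The cap is invented; real job definition uses several criteria at once.
    def define_jobs(input_files, max_bytes_per_job):
        jobs, current, size = [], [], 0
        for name, nbytes in input_files:
            if current and size + nbytes > max_bytes_per_job:
                jobs.append(current)
                current, size = [], 0
            current.append(name)
            size += nbytes
        if current:
            jobs.append(current)
        return jobs

    files = [("f1", 3e9), ("f2", 2e9), ("f3", 4e9), ("f4", 1e9)]
    print(define_jobs(files, max_bytes_per_job=5e9))
    # [['f1', 'f2'], ['f3', 'f4']]
    ```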

  19. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Senchenko, A

    2012-01-01

    The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  20. A TCP/IP transport layer for the DAQ of the CMS experiment

    International Nuclear Information System (INIS)

    Kozlovszky, M.

    2004-01-01

    The CMS collaboration is currently investigating various networking technologies that may meet the requirements of the CMS Data Acquisition System (DAQ). During this study, a peer transport component based on TCP/IP has been developed using object-oriented techniques for the distributed DAQ framework named XDAQ. This framework has been designed to facilitate the development of distributed data acquisition systems within the CMS experiment. The peer transport component had to meet three main requirements. Firstly, it had to provide fair access to the communication medium for competing applications. Secondly, it had to provide as much of the available bandwidth to the application layer as possible. Finally, it had to hide the complexity of using non-blocking TCP/IP connections from the application layer. This paper describes the development of the peer transport component and then presents and draws conclusions on the measurements made during tests. The major topics investigated include: blocking versus non-blocking communication, TCP/IP configuration options, and multi-rail connections.
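
    The third requirement, hiding non-blocking TCP/IP from the application layer, is commonly met with a readiness-notification event loop; the sketch below uses Python's selectors module as an illustration of the idea, not the XDAQ implementation.

    ```python
    import selectors
    import socket

    # Minimal non-blocking echo server: the selector hides readiness handling,
    # much as a peer transport hides non-blocking sockets from applications.
    sel = selectors.DefaultSelector()
    srv = socket.socket()
    srv.bind(("127.0.0.1", 9099))
    srv.listen()
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, data="accept")

    def serve_once():
        for key, _ in sel.select(timeout=1.0):
            if key.data == "accept":
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="read")
            else:
                buf = key.fileobj.recv(4096)
                if buf:
                    key.fileobj.send(buf)   # echo back (assumes socket writable)
                else:
                    sel.unregister(key.fileobj)
                    key.fileobj.close()

    # Call serve_once() in a loop to run the server.
    ```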

  1. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation at the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required the use of modern computing technologies and approaches. We report the technical details of this development: database implementation, server logic and Web user interface technologies.

  2. Overview and future developments of the FPGA-based DAQ of COMPASS

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yunpeng; Huber, Stefan; Konorov, Igor; Levit, Dmytro [Physik-Department E18, Technische Universitaet Muenchen (Germany); Bodlak, Martin [Department of Low-Temperature Physics, Charles University Prague (Czech Republic); Frolov, Vladimir [European Organization for Nuclear Research - CERN (Switzerland); Jary, Vladimir; Virius, Miroslav [Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University (Czech Republic); Novy, Josef [European Organization for Nuclear Research - CERN (Switzerland); Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University (Czech Republic); Steffen, Dominik [Physik-Department E18, Technische Universitaet Muenchen (Germany); European Organization for Nuclear Research - CERN (Switzerland)

    2016-07-01

    COMPASS is a fixed-target experiment at the SPS accelerator at CERN dedicated to the study of hadron structure and spectroscopy. In 2014, an FPGA-based data acquisition system (FDAQ) was deployed. Its hardware event builder, consisting of nine custom-designed FPGA cards, replaced 30 distributed online computers and around 100 PCI cards. As a result, the new DAQ provides higher bandwidth and better reliability. By buffering the data, the system exploits the spill structure of the SPS, averaging the maximum on-spill data rate of 1.5 GB/s over the whole SPS duty cycle. A modern run control software allows user-friendly monitoring and configuration of the hardware nodes of the event builder. From 2016, it is planned to wire all point-to-point high-speed links via a fully programmable crosspoint switch. The crosspoint switch will provide a fully customizable DAQ network topology between the front-end electronics, the event-building hardware, and the readout computers. It will therefore simplify compensation for hardware failures and improve load balancing.

  3. ATLAS Operations: Experience and Evolution in the Data Taking Era

    CERN Document Server

    Ueda, I; The ATLAS collaboration; Goossens, L; Stewart, G; Jezequel, S; Nairz, A; Negri, G; Campana, S; Di Girolamo, A

    2011-01-01

    This paper summarises the operational experience and improvements of the ATLAS hierarchical multi-tier computing infrastructure in the past year, leading to the taking and processing of the first collisions in 2009 and 2010. Special focus will be given to the Tier-0, which is responsible, among other things, for prompt processing of the raw data coming from the online DAQ system and is thus a critical part of the chain. We will give an overview of the Tier-0 architecture, and improvements based on the operational experience. Emphasis will be put on the new developments, namely the Task Management System opening Tier-0 to expert users and the Web 2.0 monitoring and management suite. We then overview the achieved performance with the distributed computing system, discuss observed data access patterns over the grid and describe how we used this information to improve analysis rates.

  4. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially be composed of 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay of the control and monitoring software with the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. Also new was running algorithms in the online environment for trigger selection and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently via peer-to-peer software to this large PC cluster. T...

  5. Testing on a Large Scale running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Albuquerque-Portes, M; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garcia-Murillo, R; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Hughes-Jones, R E; Höcker, A; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Le Vine, M J; Leahu, L; Leahu, M; Lehmann-Miotto, G; Liu, W; Maeno, T; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Männer, R; Müller, M; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Seixas, M; Sloper, J; Sole-Segura, E; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; von der Schmitt, H; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially be composed of 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay of the control and monitoring software with the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. Also new was running algorithms in the online environment for trigger selection and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently via peer-to-peer software to this large PC cluster. T...

  6. Data acquisition and processing in the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator

    CERN Document Server

    Valero, Alberto; The ATLAS collaboration

    2016-01-01

    The LHC has planned a series of upgrades culminating in the High Luminosity LHC (HL-LHC), which will have an average luminosity 5-7 times larger than the nominal Run-2 value. The ATLAS Tile Calorimeter (TileCal) will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal read-out electronics will be redesigned, introducing a new read-out strategy. The photomultiplier signals will be digitized and transferred to the TileCal PreProcessors (TilePPr) located off-detector for every bunch crossing, requiring a data bandwidth of 80 Tbps. The TilePPr will provide preprocessed information to the first level of trigger and in parallel will store the samples in pipeline memories. The data of the events selected by the trigger system will be transferred to the ATLAS global Data AcQuisition (DAQ) system for further processing. A demonstrator drawer has been built to evaluate the new proposed readout architecture and prototypes of all the components. In the demonstrator, the detector data received in the Til...

  7. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  8. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2014-01-01

    In this paper we describe ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  9. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, A; Di Girolamo, A; Klimentov, A; Oleynik, D; Petrosyan, A

    2013-01-01

    In this paper we describe ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  10. ATLAS Operations: Experience and Evolution in the Data Taking Era

    International Nuclear Information System (INIS)

    Ueda, I

    2011-01-01

    This paper summarises the operational experience and improvements of the ATLAS hierarchical multi-tier computing infrastructure in the past year, leading to the taking and processing of the first collisions in 2009 and 2010. Special focus will be given to the Tier-0, which is responsible, among other things, for prompt processing of the raw data coming from the online DAQ system and is thus a critical part of the chain. We will give an overview of the Tier-0 architecture, and improvements based on the operational experience. Emphasis will be put on the new developments, namely the Task Management System opening Tier-0 to expert users and the Web 2.0 monitoring and management suite. We then overview the achieved performance with the distributed computing system, discuss observed data access patterns over the grid and describe how we used this information to improve analysis rates.

  11. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  12. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present ATLAS Grid Information System (AGIS) designed to integrate configurat...

  13. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  14. FELIX: a High-Throughput Network Approach for Interfacing to Front End Electronics for ATLAS Upgrades

    International Nuclear Information System (INIS)

    Anderson, J; Drake, G; Ryu, S; Zhang, J; Borga, A; Boterenbrood, H; Schreuder, F; Vermeulen, J; Chen, H; Chen, K; Lanni, F; Francis, D; Gorini, B; Miotto, G Lehmann; Schumacher, J; Vandelli, W; Levinson, L; Narevicius, J; Roich, A; Plessl, C

    2015-01-01

    The ATLAS experiment at CERN is planning full deployment of a new unified optical link technology for connecting detector front end electronics on the timescale of the LHC Run 4 (2025). It is estimated that roughly 8000 GBT (GigaBit Transceiver) links, with transfer rates up to 10.24 Gbps, will replace existing links used for readout, detector control and distribution of timing and trigger information. A new class of devices will be needed to interface many GBT links to the rest of the trigger, data-acquisition and detector control systems. In this paper FELIX (Front End LInk eXchange) is presented, a PC-based device to route data from and to multiple GBT links via a high-performance general purpose network capable of a total throughput up to O(20 Tbps). FELIX implies architectural changes to the ATLAS data acquisition system, such as the use of industry standard COTS components early in the DAQ chain. Additionally the design and implementation of a FELIX demonstration platform is presented and hardware and software aspects will be discussed. (paper)

  15. Soft real-time alarm messages for ATLAS TDAQ

    CERN Document Server

    Darlea, G; Martin, B; Lehmann Miotto, G

    2010-01-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of meaningful network failures and events in order to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then publishes the information via a CORBA programming interface. A gateway program (called NSG, the Network Service Gateway) connects to Spectrum through CORBA and exposes a Java RMI interface to its clients. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in th...
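
    The subscribe-and-filter behaviour attributed to the DNC above follows the observer pattern; below is a minimal sketch with invented event fields and host names, not the actual Java RMI interface.

    ```python
    # Observer with filtering: clients subscribe with a predicate and receive
    # only matching network events (fields are invented for illustration).
    class EventGateway:
        def __init__(self):
            self.subscribers = []          # (predicate, callback) pairs

        def subscribe(self, predicate, callback):
            self.subscribers.append((predicate, callback))

        def publish(self, event):
            for predicate, callback in self.subscribers:
                if predicate(event):
                    callback(event)

    gw = EventGateway()
    active_hosts = {"pc-tdaq-042"}
    gw.subscribe(lambda e: e["host"] in active_hosts,
                 lambda e: print("alarm:", e))
    gw.publish({"host": "pc-tdaq-042", "status": "link down"})   # delivered
    gw.publish({"host": "pc-tdaq-999", "status": "link down"})   # filtered out
    ```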

  16. Argonne's atlas control system upgrade

    International Nuclear Information System (INIS)

    Munson, F.; Quock, D.; Chapin, B.; Figueroa, J.

    1999-01-01

    The ATLAS facility (Argonne Tandem-Linac Accelerator System) is located at the Argonne National Laboratory. The facility is a tool used in nuclear and atomic physics research, which focuses primarily on heavy-ion physics. The accelerator as well as its control system are evolutionary in nature, and consequently, continue to advance. In 1998 the most recent project to upgrade the ATLAS control system was completed. This paper briefly reviews the upgrade, and summarizes the configuration and features of the resulting control system

  17. Jet energy measurements at ILC. Calorimeter DAQ requirements and application in Higgs boson mass measurements

    International Nuclear Information System (INIS)

    Ebrahimi, Aliakbar

    2017-11-01

    The jet energy resolution required for the Higgs boson mass measurement can only be achieved using the particle flow approach to reconstruction. The particle flow approach requires highly granular calorimeters and a highly efficient tracking system. The CALICE collaboration is developing highly granular calorimeters for such applications. One of the challenges in the development of such calorimeters with millions of read-out channels is their Data Acquisition (DAQ) system. The second part of this thesis involves contributions to the development of a new DAQ system for the CALICE scintillator calorimeters. The new DAQ system fulfills the requirements for the prototype tests while being scalable to larger systems. The requirements and general architecture of the DAQ system are outlined in this thesis. The new DAQ system was commissioned and tested with particle beams at the CERN Proton Synchrotron test beam facility in 2014, the results of which are presented here.

  18. Jet energy measurements at ILC. Calorimeter DAQ requirements and application in Higgs boson mass measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimi, Aliakbar

    2017-11-15

    The jet energy resolution required for the Higgs boson mass measurement can only be achieved using the particle flow approach to reconstruction. The particle flow approach requires highly granular calorimeters and a highly efficient tracking system. The CALICE collaboration is developing highly granular calorimeters for such applications. One of the challenges in the development of such calorimeters with millions of read-out channels is their Data Acquisition (DAQ) system. The second part of this thesis involves contributions to the development of a new DAQ system for the CALICE scintillator calorimeters. The new DAQ system fulfills the requirements for the prototype tests while being scalable to larger systems. The requirements and general architecture of the DAQ system are outlined in this thesis. The new DAQ system was commissioned and tested with particle beams at the CERN Proton Synchrotron test beam facility in 2014, the results of which are presented here.

  19. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237783; The ATLAS collaboration; Zwalinski, L.; Bortolin, C.; Vogt, S.; Godlewski, J.; Crespo-Lopez, O.; Van Overbeek, M.; Blaszcyk, T.

    2017-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space made available by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower-temperature operation (< -35°C) than previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose, up to 550 fb^-1 of integrated luminosity.

  20. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    Verlaat, Bartholomeus; The ATLAS collaboration

    2016-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space made available by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower-temperature operation (< -35°C) than previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose, up to 550 fb^-1 of integrated luminosity. This paper describes the design, development, construction and commissioning of the IBL CO2 cooling system. It describes the challenges overcome and the important lessons learned for the development of future systems which are now under design for the Phase-II upgrade detectors.

  1. The ATLAS detector control system

    International Nuclear Information System (INIS)

    Schlenker, S.; Arfaoui, S.; Franz, S.

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of more than 130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher-level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. First, this contribution describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high luminosity upgrades are outlined. (authors)

  2. The ATLAS Detector Control System

    CERN Document Server

    Schlenker, S; Kersten, S; Hirschbuehl, D; Braun, H; Poblaguev, A; Oliveira Damazio, D; Talyshev, A; Zimmermann, S; Franz, S; Gutzwiller, O; Hartert, J; Mindur, B; Tsarouchas, CA; Caforio, D; Sbarra, C; Olszowska, J; Hajduk, Z; Banas, E; Wynne, B; Robichaud-Veronneau, A; Nemecek, S; Thompson, PD; Mandic, I; Deliyergiyev, M; Polini, A; Kovalenko, S; Khomutnikov, V; Filimonov, V; Bindi, M; Stanecka, E; Martin, T; Lantzsch, K; Hoffmann, D; Huber, J; Mountricha, E; Santos, HF; Ribeiro, G; Barillari, T; Habring, J; Arabidze, G; Boterenbrood, H; Hart, R; Marques Vinagre, F; Lafarguette, P; Tartarelli, GF; Nagai, K; D'Auria, S; Chekulaev, S; Phillips, P; Ertel, E; Brenner, R; Leontsinis, S; Mitrevski, J; Grassi, V; Karakostas, K; Iakovidis, G.; Marchese, F; Aielli, G

    2011-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of >130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher-level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years an...

  3. Performance of n-in-p pixel detectors irradiated at fluences up to 5×10^15 n_eq/cm^2 for the future ATLAS upgrades

    CERN Document Server

    INSPIRE-00219560; La Rosa, A.; Nisius, R.; Pernegger, H.; Richter, R.H.; Weigell, P.

    We present the results of the characterization of novel n-in-p planar pixel detectors, designed for the future upgrades of the ATLAS pixel system. N-in-p silicon devices are a promising candidate to replace the n-in-n sensors thanks to their radiation hardness and cost effectiveness, which allow for enlarging the area instrumented with pixel detectors. The n-in-p modules presented here are composed of pixel sensors produced by CiS connected by bump-bonding to the ATLAS readout chip FE-I3. The characterization of these devices has been performed with the ATLAS pixel read-out systems, TurboDAQ and USBPIX, before and after irradiation with 25 MeV protons and neutrons up to a fluence of 5×10^15 n_eq/cm^2. The charge collection measurements carried out with radioactive sources have proven the feasibility of employing this kind of detector up to these particle fluences. The collected charge has been measured to be, for all fluences, in excess of twice the FE-I3 threshold, tuned to 3200 e. The first result...

  4. DAQ systems for the high energy and nuclotron internal target polarimeters with network access to polarization calculation results and raw data

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2004-01-01

    The on-line data acquisition (DAQ) system for the Nuclotron Internal Target Polarimeter (ITP) at the LHE, JINR, is described in terms of its design and implementation, which are based on the distributed data acquisition and processing system qdpb. Software modules specific to this implementation (dependent on the ITP data contents and hardware layout) are discussed briefly in comparison with those for the High Energy Polarimeter (HEP) at the LHE, JINR. User access methods, both to raw data and to the results of polarization calculations of the ITP and HEP, are discussed.

  5. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  6. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

    This paper describes the infrastructure the authors have constructed for data acquisition (DAQ) systems. It reports recent developments concerning the HP VME board computer running LynxOS (HP742rt/HP-RT) and the Alpha/OSF1 system with a VMEbus adapter. The paper also reports the current status of DAQBENCH, a Benchmark Suite for Data Acquisition, which measures not only the performance of VME/CAMAC access but also that of context switching, inter-process communication and so on, for various computers including workstation-based systems and VME board computers.
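
    In the spirit of such a benchmark suite, one of the quantities mentioned, inter-process communication performance, can be sketched in a few lines of Python (this is only a toy in the style of DAQBENCH; the actual suite targets VME/CAMAC access on VxWorks/LynxOS-class systems):

        # Toy IPC round-trip latency measurement between two processes.
        import time
        from multiprocessing import Pipe, Process

        def echo(conn, n):
            for _ in range(n):
                conn.send(conn.recv())      # bounce every message straight back

        if __name__ == "__main__":
            N = 10_000
            parent, child = Pipe()
            worker = Process(target=echo, args=(child, N))
            worker.start()
            t0 = time.perf_counter()
            for _ in range(N):
                parent.send(b"x")
                parent.recv()
            elapsed = time.perf_counter() - t0
            worker.join()
            print(f"mean IPC round trip: {elapsed / N * 1e6:.1f} us")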

  7. The ATLAS Tier-0 Overview and operational experience

    CERN Document Server

    Elsing, M; Nairz, A; Negri, G

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for promptly processing the raw data coming from the online DAQ system, archiving the raw and derived data on tape, registering the data with the relevant catalogues, and distributing them to the associated Tier-1 centres. The Tier-0 is already fully functional. It has successfully participated in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and its operations model. It will review the operational experience gained in cosmic, c...

  8. A system for managing information at ATLAS

    International Nuclear Information System (INIS)

    Tilbrook, I.R.

    1993-01-01

    In response to a need for better management of maintenance and document information at the Argonne Tandem-Linear Accelerating System (ATLAS), the ATLAS Information Management System (AIMS) has been created. The system is based on the relational database model. The system's applications use the Alpha-4 relational database management system, a commercially available software package. The system's function and design are described
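
    The relational model the abstract refers to can be illustrated with a toy schema (table and column names are invented for illustration; the real AIMS applications ran on the commercial Alpha-4 package, not SQLite):

        # Toy relational layout: maintenance records joined to equipment.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE equipment (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE maintenance (
                id INTEGER PRIMARY KEY,
                equipment_id INTEGER REFERENCES equipment(id),
                performed_on TEXT, notes TEXT);
        """)
        db.execute("INSERT INTO equipment VALUES (1, 'resonator-07')")
        db.execute("INSERT INTO maintenance VALUES (1, 1, '1993-02-11', 'retuned')")
        for row in db.execute("""SELECT e.name, m.performed_on, m.notes
                                 FROM maintenance m
                                 JOIN equipment e ON e.id = m.equipment_id"""):
            print(row)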

  9. Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) control system

    International Nuclear Information System (INIS)

    Power, M.; Munson, F.

    2012-01-01

    Given that the Argonne Tandem Linear Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present, and future of the ATLAS Control System, and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the sixties. With the addition of the Booster section in the late seventies came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and two CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades are also in the planning stages that will continue to evolve the control system. (authors)

  10. Firmware development and testing of the ATLAS IBL Readout Driver card

    CERN Document Server

    Chen, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, the Pixel detector is inserting an additional inner layer called the Insertable B-Layer (IBL). The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential frontend data path of the IBL's off-detector DAQ system. The strategy for IBL ROD firmware development focused on migrating and tailoring HDL code blocks from the Pixel ROD to ensure modular compatibility in future ROD upgrades, in which a unified code version will interface with the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic-system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation, tested in the testbench and on ROD prototypes, will be report...

  11. Integrated graphical user interface for the back-end software sub-system

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2001-01-01

    The ATLAS data acquisition and Event Filter prototype '-1' project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system at the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the data acquisition (DAQ). The back-end sub-system includes core components and detector integration components. One of the detector integration components is the Integrated Graphical User Interface (IGUI), which is intended to give a view of the status of the DAQ system and its sub-systems (Dataflow, Event Filter and Back-end) and to allow the user (a general user, such as a shift operator at a test beam, or an expert wishing to control and debug the DAQ system) to control its operation. Since the IGUI is intended to be both a Status Display and a Control Interface, there are three groups of functional requirements: display requirements (the information to be displayed); control requirements (the actions the IGUI shall perform on the DAQ components); and general requirements, applying to the overall functionality of the IGUI. The constraint requirements include requirements related to access control (shift operator or expert user). The quality requirements are related to portability across different platforms. The IGUI has to interact with many components in a distributed environment. The following design guidelines have been considered in order to fulfil the requirements: use a modular design with easy integration of different sub-systems; use the Java language for portability and powerful graphical features; use CORBA interfaces for communication with other components. The actual implementation of the Back-end software components uses Inter-Language Unification (ILU) for inter-process communication. Different methods of access of Java applications to ILU C++ servers have been evaluated (native methods, ILU Java support

  12. The ATLAS beam pick-up based timing system

    International Nuclear Information System (INIS)

    Ohm, C.; Pauly, T.

    2010-01-01

    The ATLAS BPTX stations are composed of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The BPTX signals are discriminated with a constant-fraction discriminator to provide a Level-1 trigger when a bunch passes through ATLAS. Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX monitoring system measures the phase between collisions and clock with a precision better than 100 ps in order to guarantee a stable phase relationship for optimal signal sampling in the sub-detector front-end electronics. In addition to monitoring this phase, the properties of the individual bunches are measured and the structure of the beams is determined. On September 10, 2008, the first LHC beams reached the ATLAS experiment. During this period with beam, the ATLAS BPTX system was used extensively to time in the read-out of the sub-detectors. In this paper, we present the performance of the BPTX system and its measurements of the first LHC beams.
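
    The phase measurement described above boils down to comparing bunch arrival times against the machine clock. A numerical sketch follows (arrival times are simulated here; 24.95 ns is the period of the roughly 40.08 MHz LHC bunch clock):

        # Recover the phase of simulated bunch arrivals w.r.t. the bunch clock.
        import random

        CLOCK_PERIOD_NS = 24.95                      # LHC bunch-clock period
        true_phase_ns = 3.7                          # value to be recovered
        arrivals = [i * CLOCK_PERIOD_NS + true_phase_ns + random.gauss(0, 0.05)
                    for i in range(1000)]            # 50 ps jitter

        phases = [t % CLOCK_PERIOD_NS for t in arrivals]
        print(f"measured phase: {sum(phases) / len(phases):.3f} ns")  # ~3.700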

  13. Firmware development and testing of the ATLAS Pixel Detector / IBL ROD card

    CERN Document Server

    Balbi, G; The ATLAS collaboration; Gabrielli, A; Lama, L; Travaglini, R; Backhaus, M; Bindi, M; Chen, S-P; Flick, T; Kretz, M; Kugel, A; Wensing, M

    2014-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, the Pixel detector has inserted an additional inner layer called the Insertable B-Layer (IBL). The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential frontend data path of the IBL's off-detector DAQ system. The strategy for IBL ROD firmware development was three-fold: keeping as much of the Pixel ROD datapath firmware logic as possible, employing a completely new scheme of steering and calibration firmware, and designing the overall system to prepare for a future unified code version integrating the IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ testbench using a realistic frontend chip model was created to serve as an initial framework for full offline electronic-system simulation. In this document, major firmware achievements concerning the IBL ROD data path im...

  14. CMS DAQ current and future hardware upgrades up to post Long Shutdown 3 (LS3) times

    CERN Document Server

    Racz, Attila; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; da Silva Gomes, Diego; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre; Janulis, Mindaugas; Lettrich, Michael; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius K; Morovic, Srecko; O'Dell, Vivian; Orn, Samuel Johan; Orsini, Luciano; Papakrivopoulos, Ioannis; Paus, Christoph; Petrova, Petia; Petrucci, Andrea; Pieri, Marco; Rabady, Dinyar; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Vazquez Velez, Cristina; Vougioukas, Michail; Zejdl, Petr

    2017-01-01

    Following the first LHC collisions seen and recorded by CMS in 2009, the DAQ hardware went through a major upgrade during LS1 (2013-2014), and new detectors have been connected during the 2015-2016 and 2016-2017 winter shutdowns. Now, LS2 (2019-2020) and LS3 (2024-mid 2026) are actively being prepared. This paper shows how the CMS DAQ hardware has evolved from the beginning and will continue to evolve in order to meet the future challenges posed by the High Luminosity LHC (HL-LHC) and the CMS detector evolution. In particular, post-LS3 DAQ architectures are focused upon.

  15. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Space Transportation System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  16. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers the ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides a fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to the responsible developers. These and other recent developments will be presented and future plans will be described.
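
    The orchestration pattern described, parallel branch builds with severity-classified failures, can be sketched as follows (branch names and the build step are stand-ins; the real NICOS drives full release builds and the ATN test suites):

        # Toy nightly orchestration: build branches in parallel, classify results.
        from concurrent.futures import ThreadPoolExecutor
        import subprocess

        BRANCHES = ["dev/x86_64-gcc", "dev/x86_64-clang"]    # invented names

        def build(branch):
            # stand-in for a real checkout+compile step (Unix 'true' here)
            proc = subprocess.run(["true"], capture_output=True)
            return branch, "ok" if proc.returncode == 0 else "compile-error"

        with ThreadPoolExecutor(max_workers=len(BRANCHES)) as pool:
            for branch, severity in pool.map(build, BRANCHES):
                print(f"{branch}: {severity}")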

  17. The ATLAS Level-1 Muon to Central Trigger Processor Interface

    CERN Document Server

    Berge, D; Farthouat, P; Haas, S; Klofver, P; Krasznahorkay, A; Messina, A; Pauly, T; Schuler, G; Spiwoks, R; Wengler, T; PH-EP

    2007-01-01

    The Muon to Central Trigger Processor Interface (MUCTPI) is part of the ATLAS Level-1 trigger system and connects the output of the muon trigger system to the Central Trigger Processor (CTP). At every bunch crossing (BC), the MUCTPI receives information on muon candidates from each of the 208 muon trigger sectors and calculates the total multiplicity for each of six transverse momentum (pT) thresholds. This multiplicity value is then sent to the CTP, where it is used together with the input from the Calorimeter trigger to make the final Level-1 Accept (L1A) decision. In addition, the MUCTPI provides summary information to the Level-2 trigger and to the data acquisition (DAQ) system for events selected at Level-1. This information is used to define the regions of interest (RoIs) that drive the Level-2 muon trigger processing. The MUCTPI system consists of a 9U VME chassis with a dedicated active backplane and 18 custom-designed modules. The design of the modules is based on state-of-the-art FPGA devices and special ...
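
    The multiplicity sum the MUCTPI performs can be sketched numerically (candidate data are simulated; the saturation of each multiplicity at 7, i.e. a 3-bit field, is an assumption of this sketch):

        # Toy MUCTPI multiplicity sum over 208 sectors and 6 pT thresholds.
        import random

        N_SECTORS, N_THRESHOLDS = 208, 6
        sectors = [[random.choice([0, 0, 0, 1]) for _ in range(N_THRESHOLDS)]
                   for _ in range(N_SECTORS)]        # candidates per threshold

        multiplicity = [min(sum(sec[t] for sec in sectors), 7)  # assumed 3-bit cap
                        for t in range(N_THRESHOLDS)]
        print("multiplicities per pT threshold:", multiplicity)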

  18. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  19. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform-independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server; it allows remote access to and execution of selected system commands and tasks, supports the execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog-to-digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions.
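
    A minimal Python counterpart of such a diagnostic entry point might look as follows (the endpoint and the command whitelist are invented; the original toolkit is a specialized server for VxWorks targets):

        # Minimal web endpoint running only whitelisted diagnostic commands.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        import subprocess

        COMMANDS = {"/uptime": ["uptime"]}           # whitelist (Unix 'uptime')

        class Diag(BaseHTTPRequestHandler):
            def do_GET(self):
                cmd = COMMANDS.get(self.path)
                if cmd is None:
                    self.send_error(404, "unknown diagnostic")
                    return
                out = subprocess.run(cmd, capture_output=True).stdout
                self.send_response(200)
                self.end_headers()
                self.wfile.write(out)

        HTTPServer(("localhost", 8080), Diag).serve_forever()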

  20. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift-tube based detectors. These detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a few microns precision. This permits optimal measurements of the parameters of the charged particles' trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large-scale computing simulation of the ATLAS detector has been performed, mimicking ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment results for several challenges (real cosmic-ray data taking and computing system commissioning) will be...

  1. Completion of the ATLAS control system upgrade

    International Nuclear Information System (INIS)

    Munson, F. H.

    1998-01-01

    In the fall of 1992 at SNEAP (the Symposium of North Eastern Accelerator Personnel), a project to upgrade the ATLAS (Argonne Tandem Linear Accelerator System) control system was first reported. Not unlike the accelerator it services, the control system will continue to evolve. However, the first of this year marked the completion of this most recent upgrade project. Since the control system upgrade took place during a period when ATLAS was operating at a record number of hours, special techniques were necessary to enable the development of the new control system ''on line'' while still serving the needs of normal operations. This paper reviews the techniques used for upgrading the ATLAS control system while the system was in use. In addition, a summary of the upgrade project and final configuration, as well as some of the features of the new control system, is provided.

  2. Design and Implementation of the ATLAS Detector Control System

    CERN Document Server

    Boterenbrood, H; Cook, J; Filimonov, V; Hallgren, B I; Heubers, W P J; Khomoutnikov, V; Ryabov, Yu; Varela, F

    2004-01-01

    The overall dimensions of the ATLAS experiment and its harsh environment, due to radiation and magnetic field, represent new challenges for the implementation of the Detector Control System. It supervises all hardware of the ATLAS detector, monitors the infrastructure of the experiment, and provides information exchange with the LHC accelerator. The system must allow for the operation of the different ATLAS sub-detectors in stand-alone mode, as required for calibration and debugging, as well as the coherent and integrated operation of all sub-detectors for physics data taking. For this reason, the Detector Control System is logically arranged to map the hierarchical organization of the ATLAS detector. Special requirements are placed onto the ATLAS Detector Control System because of the large number of distributed I/O channels and of the inaccessibility of the equipment during operation. Standardization is a crucial issue for the design and implementation of the control system because of the large variety of e...

  3. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  4. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  5. Implementation of CMS Central DAQ monitoring services in Node.js

    CERN Document Server

    Vougioukas, Michail

    2015-01-01

    This report summarizes my contribution to the CMS Central DAQ monitoring system, in my capacity as a CERN Summer Student Programme participant, from June to September 2015. Specifically, my work was focused on rewriting (from Apache/PHP to Node.js/Javascript) and optimizing real-time monitoring web services (mostly Elasticsearch-based but also some Oracle-based) for the CMS Data Acquisition (Run II Filterfarm). Moreover, it included an implementation of web-server caching, for better scalability when simultaneous web clients use the services. Measurements confirmed that the software developed during this project indeed has the potential to provide scalable services.
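
    The report concerns Node.js services, but the caching idea it mentions is language-agnostic; a small Python sketch of time-bounded (TTL) response caching (the decorator and function names are invented) is:

        # TTL caching of an expensive monitoring query.
        import functools, time

        def ttl_cache(seconds):
            def wrap(fn):
                cache = {}
                @functools.wraps(fn)
                def inner(*args):
                    hit = cache.get(args)
                    if hit and time.monotonic() - hit[0] < seconds:
                        return hit[1]                # fresh enough: reuse
                    value = fn(*args)
                    cache[args] = (time.monotonic(), value)
                    return value
                return inner
            return wrap

        @ttl_cache(seconds=5)
        def monitoring_snapshot(stream):
            # stand-in for a real Elasticsearch/Oracle query
            return {"stream": stream, "at": time.time()}

        print(monitoring_snapshot("filterfarm"))
        print(monitoring_snapshot("filterfarm"))     # second call served from cache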

  6. ATLAS: A High-cadence All-sky Survey System

    Science.gov (United States)

    Tonry, J. L.; Denneau, L.; Heinze, A. N.; Stalder, B.; Smith, K. W.; Smartt, S. J.; Stubbs, C. W.; Weiland, H. J.; Rest, A.

    2018-06-01

    Technology has advanced to the point that it is possible to image the entire sky every night and process the data in real time. The sky is hardly static: many interesting phenomena occur, including variable stationary objects such as stars or QSOs, transient stationary objects such as supernovae or M dwarf flares, and moving objects such as asteroids and the stars themselves. Funded by NASA, we have designed and built a sky survey system for the purpose of finding dangerous near-Earth asteroids (NEAs). This system, the "Asteroid Terrestrial-impact Last Alert System" (ATLAS), has been optimized to produce the best survey capability per unit cost, and therefore is an efficient and competitive system for finding potentially hazardous asteroids (PHAs) but also for tracking variables and finding transients. While carrying out its NASA mission, ATLAS now discovers more bright (m < 19) NEAs than any other survey, observing the accessible sky at a roughly two-day cadence. ATLAS discovered the afterglow of a gamma-ray burst independent of the high-energy trigger and has released a variable star catalog of 5 × 10^6 sources. This is the first of a series of articles describing ATLAS, devoted to the design and performance of the ATLAS system. Subsequent articles will describe in more detail the software, the survey strategy, ATLAS-derived NEA population statistics, transient detections, and the first data release of variable stars and transient light curves.

  7. THOR-a commodity component prototype for the ATLAS Event Filter

    CERN Document Server

    Davis, R; MacKinnon, S; Pinfold, James L

    1999-01-01

    The ATLAS Event Filter prototype developed at the University of Alberta (the THOR project) is being used in the context of the DAQ-1 project to study issues related to the implementation of the sub-farm model using commodity components and open-source software. The prototype consists of seven dual Pentium II 450 MHz machines connected via a fast Ethernet switch and will soon be upgraded to nine dual Pentium 450 MHz machines connected in a 3×3 array using the Scalable Coherent Interconnect (SCI). The entire prototype is placed behind a firewall machine which serves as the control centre for the processor farm. (8 refs).

  8. The ATLAS Detector Control System

    International Nuclear Information System (INIS)

    Lantzsch, K; Braun, H; Hirschbuehl, D; Kersten, S; Arfaoui, S; Franz, S; Gutzwiller, O; Schlenker, S; Tsarouchas, C A; Mindur, B; Hartert, J; Zimmermann, S; Talyshev, A; Oliveira Damazio, D; Poblaguev, A; Martin, T; Thompson, P D; Caforio, D; Sbarra, C; Hoffmann, D

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  9. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  10. The design and realization of general high-speed RAIN100B DAQ module based on powerPC MPC5200B processor

    International Nuclear Information System (INIS)

    Xue Tao; Gong Guanghua; Shao Beibei

    2010-01-01

    In order to handle the DAQ functions of nuclear electronics, the Department of Engineering Physics of Tsinghua University has designed and realized a general, high-speed RAIN100B DAQ module based on Freescale's PowerPC MPC5200B processor. The RAIN100B was used in a GEM detector DAQ, where it reached a data rate of up to 90 Mbps. The results are also presented and discussed. (authors)

  11. Upgrading the ATLAS control system

    International Nuclear Information System (INIS)

    Munson, F.H.; Ferraretto, M.

    1993-01-01

    Heavy-ion accelerators are tools used in the research of nuclear and atomic physics. The ATLAS facility at the Argonne National Laboratory is one such tool. The ATLAS control system serves as the primary operator interface to the accelerator. A project to upgrade the control system is presently in progress. Since this is an upgrade project and not a new installation, it was imperative that the development work proceed without interference to normal operations. An additional criterion for the development work was that the writing of additional ''in-house'' software should be kept to a minimum. This paper briefly describes the control system being upgraded, and explains some of the reasons for the decision to upgrade the control system. Design considerations and goals for the new system are described, and the present status of the upgrade is discussed.

  12. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs are executed across more than a hundred distributed computing sites by PanDA - the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation. (paper)
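
    The bi-level split described, templated task definitions expanded into site-tailored jobs, can be sketched as follows (the function names only loosely mirror the DEfT/JEDI division of labour; everything here is illustrative):

        # Toy bi-level workflow: a templated task expands into per-site jobs.
        def define_task(template, n_events):
            return {"template": template, "n_events": n_events}

        def split_into_jobs(task, site_limits):
            jobs, start = [], 0
            for site, max_events in site_limits.items():
                if start >= task["n_events"]:
                    break
                n = min(max_events, task["n_events"] - start)
                jobs.append({"site": site, "first": start, "n": n,
                             "step": task["template"]})
                start += n                      # one job per site-sized slice
            return jobs

        task = define_task("simulate+digitize+reconstruct", 2500)
        for job in split_into_jobs(task, {"A": 1000, "B": 1000, "C": 1000}):
            print(job)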

  13. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is

  14. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing LHC luminosity in future runs, new developments are underway to use opportunistic resources such as HPCs even more efficiently and to utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook on new workflow and data management ideas for the beginning of LHC Run 3.

  15. A dynamic system for ATLAS software installation on OSG grid sites

    International Nuclear Information System (INIS)

    Zhao, X; Maeno, T; Wenaus, T; Leuhring, F; Youssef, S; Brunelle, J; De Salvo, A; Thompson, A S

    2010-01-01

    A dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and to reduce its failure rate. In this paper, we discuss the issues encountered in the previous software installation system, and introduce the new approach, which is built upon new developments in the areas of the ATLAS workload management system (PanDA) and the software package management system (pacman). It is also designed to integrate with the EGEE ATLAS software installation framework. In the new system, ATLAS software releases are packaged as pacballs: uniquely identifiable and reproducible self-installing data files. The distribution of pacballs to remote sites is managed by the ATLAS data management system (DQ2) and the PanDA server. The installation on remote sites is automatically triggered by the PanDA pilot jobs. The installation job payload connects to a central ATLAS software installation portal, making installation status information easily accessible across the OSG and EGEE Grids. The issues encountered in running the new system in production, and our future plans for improvement, will also be discussed.
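
    The defining property of a pacball, a self-installing file whose identity is fixed by its content, can be sketched with a content hash (the file names and install step below are invented; the real mechanism is built on pacman and DQ2):

        # Identify a package by the hash of its bytes, so installs are verifiable.
        import hashlib, pathlib

        def pacball_id(path):
            return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

        def install(path, expected_id, target):
            if pacball_id(path) != expected_id:  # refuse a corrupted payload
                raise ValueError("package does not match its advertised identity")
            pathlib.Path(target).mkdir(parents=True, exist_ok=True)
            # ... unpacking of the payload would happen here ...
            print(f"installed {expected_id[:12]} into {target}")

        pkg = pathlib.Path("release.pacball")
        pkg.write_bytes(b"demo payload")
        install(pkg, pacball_id(pkg), "atlas-release")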

  16. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the LHC collisions. To reconstruct the trajectories of charged particles produced in these collisions, the ATLAS tracking system is equipped with silicon planar sensors and drift-tube based detectors. They constitute the ATLAS Inner Detector. In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires that its almost 36000 degrees of freedom be determined accurately. The demanded precision for the alignment of the silicon sensors is below 10 micrometers. This requires a large sample of high-momentum, isolated charged-particle tracks. The high-level trigger selects those tracks online. The raw data with the hit information of the triggered tracks are then stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of the track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of ...
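
    At its core, track-based alignment is a large least-squares problem: hit residuals with respect to fitted tracks are minimized as a function of the alignment parameters. A one-dimensional toy with three "modules" follows (numpy assumed available; the real problem couples almost 36000 parameters):

        # Toy 1-D alignment: recover per-module offsets from hit residuals.
        import numpy as np

        rng = np.random.default_rng(1)
        true_offsets = np.array([0.020, -0.010, 0.015])   # mm, three modules
        n_tracks = 500
        # residual of each hit w.r.t. its track = module offset + noise
        residuals = true_offsets + rng.normal(0.0, 0.010, size=(n_tracks, 3))

        A = np.tile(np.eye(3), (n_tracks, 1))    # each hit constrains one offset
        b = residuals.reshape(-1)
        estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated offsets [mm]:", np.round(estimate, 4))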

  17. LASER monitoring system for the ATLAS Tile Calorimeter

    International Nuclear Information System (INIS)

    Viret, S.

    2010-01-01

    The ATLAS detector at the Large Hadron Collider (LHC) at CERN uses a scintillator-iron technique for its hadronic Tile Calorimeter (TileCal). Scintillation light is read out via 9852 photomultiplier tubes (PMTs). Calibration and monitoring of these PMTs are performed using a laser-based system. Short light pulses are sent simultaneously into all the TileCal PMTs during ATLAS physics runs, thus providing essential information for ATLAS data-quality and monitoring analyses. The experimental setup developed for this purpose is described, as well as preliminary results obtained during the ATLAS commissioning phase in 2008.
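
    The essence of such a laser system is a gain-stability measurement: each PMT's response to a common light pulse is tracked relative to a reference value. A toy version (PMT names, readings and the 2% tolerance are all invented):

        # Flag PMTs whose response to the common laser pulse has drifted.
        reference = {"pmt_0001": 100.0, "pmt_0002": 98.5}   # ADC counts at t0
        today     = {"pmt_0001": 101.1, "pmt_0002": 93.2}   # current laser run

        for pmt, ref in reference.items():
            drift = (today[pmt] - ref) / ref
            flag = "DRIFT" if abs(drift) > 0.02 else "ok"   # 2% tolerance
            print(f"{pmt}: {drift:+.1%} {flag}")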

  18. Performance of the ATLAS Trigger System in 2010

    CERN Document Server

    Aad, Georges; The ATLAS collaboration
Knoops, Edith; Knue, Andrea; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kokott, Thomas; Kolachev, Guennady; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kollefrath, Michael; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Komori, Yuto; Kondo, Takahiko; Kono, Takanori; Kononov, Anatoly; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kootz, Andreas; Koperny, Stefan; Kopikov, Sergey; Korcyl, Krzysztof; Kordas, Kostantinos; Koreshev, Victor; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotamäki, Miikka Juhani; Kotov, Sergey; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasel, Olaf; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, James; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumshteyn, Zinovii; Kruth, Andre; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kundu, Nikhil; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurochkin, Yurii; Kus, Vlastimil; Kuykendall, William; Kuze, Masahiro; Kuzhir, Polina; Kvasnicka, Ondrej; Kvita, Jiri; Kwee, Regina; La Rosa, Alessandro; La Rotonda, Laura; Labarga, Luis; Labbe, Julien; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laisne, Emmanuel; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Landsman, Hagar; Lane, Jenna; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lapin, Vladimir; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larionov, Anatoly; Larner, Aimee; Lasseur, Christian; Lassnig, Mario; Lau, Wing; Laurelli, Paolo; Lavorato, Antonia; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Maner, Christophe; Le Menedeu, Eve; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Michel; Legendre, Marie; Leger, Annie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Leltchouk, Mikhail; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leontsinis, Stefanos; Leroy, Claude; Lessard, Jean-Raphael; Lesser, Jonas; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levitski, Mikhail; Lewandowska, Marta; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bo; Li, Haifeng; Li, Shu; Li, Xuefei; Liang, Zhihua; Liang, 
Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lifshitz, Ronen; Lilley, Joseph; Limbach, Christian; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linde, Frank; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Shengli; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Loken, James; Lombardo, Vincenzo Paolo; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lo Sterzo, Francesco; Losty, Michael; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lu, Liang; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Ludwig, Jens; Luehring, Frederick; Luijckx, Guy; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lungwitz, Matthias; Lupi, Anna; Lutz, Gerhard; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magnoni, Luca; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahout, Gilles; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malecki, Pawel; Malecki, Piotr; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mameghani, Raphael; Mamuzic, Judita; Manabe, Atsushi; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Manz, Andreas; Mapelli, Alessandro; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marin, Alexandru; Marino, Christopher; Marroquim, Fernando; Marshall, Robin; Marshall, Zach; Martens, Kalen; Marti-Garcia, Salvador; Martin, Andrew; Martin, Brian; Martin, Brian Thomas; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Philippe; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Maß, Martin; Massa, Ignazio; Massaro, Graziano; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mathes, Markus; Matricon, Pierre; Matsumoto, Hiroshi; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maugain, Jean-Marie; Maxfield, Stephen; Maximov, Dmitriy; May, Edward; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mazzoni, Enrico; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; McGlone, Helen; Mchedlidze, Gvantsa; McLaren, Robert Andrew; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, 
Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meinhardt, Jens; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meuser, Stefan; Meyer, Carsten; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Miele, Paola; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohapatra, Soumya; Mohn, Bjarte; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morange, Nicolas; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morin, Jerome; Morita, Youhei; Morley, Anthony Keith; Mornacchi, Giuseppe; Morone, Maria-Christina; Morozov, Sergey; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muijs, Sandra; Muir, Alex; Munwes, Yonathan; Murakami, Koichi; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Nesterov, Stanislav; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nomoto, Hiroshi; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nyman, Tommi; O'Brien, Brendan Joseph; 
O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohska, Tokio Kenneth; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olcese, Marco; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Øye, Ola; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Paige, Frank; Pajchel, Katarina; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Cavalcanti, Tiago; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Peric, Ivan; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Peshekhonov, Vladimir; Peters, Onne; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Pickford, Andrew; Piec, Sebastian Marcin; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Plano, Will; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Porter, Robert; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Lawrence; Price, Michael John; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; 
Ptacek, Elizabeth; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Ramstedt, Magnus; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rensch, Bertram; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rieke, Stefan; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodier, Stephane; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Adam; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rossi, Lucio; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rulikowska-Zarebska, Elzbieta; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Runolfsson, Ogmundur; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Savva, Panagiota; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. 
Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmidt, Michael; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitz, Martin; Schöning, André; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Christian; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shichi, Hideharu; Shimizu, Shima; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siebel, Anca-Mirela; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloan, Terrence; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sorbi, Massimo; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiriti, Eleuterio; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockmanns, Tobias; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Strong, John; 
Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Treis, Johannes; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, 
Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Viret, Sébastien; Virzi, Joseph; Vitale, Antonio; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, 
Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Anton; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2012-01-03

    Proton-proton collisions at $\\sqrt{s}=7$ TeV and heavy-ion collisions at $\\sqrt{s_{NN}}=2.76$ TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch-crossing rate of 40 MHz, of which the ATLAS trigger system is designed to record approximately 200 crossings per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010, and the performance of the trigger components and selections based on the 2010 collision data are presented. A brief outline of plans for the trigger system in 2011 is also given.
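
    As an illustration of the staged selection described above, the sketch below models a multi-level trigger as successive filters over toy events. The thresholds, signature names and rates are invented for the example and are not actual ATLAS trigger-menu values.

```python
import random

# Toy multi-level trigger: each level applies a tighter selection,
# reducing the accepted rate. All cuts and event fields are invented.

def level1(event):
    # Coarse hardware-style cut, e.g. on a transverse-energy sum
    return event["et_sum"] > 50.0

def high_level(event):
    # Refined software selection, e.g. a reconstructed muon candidate
    return event["has_muon"] and event["muon_pt"] > 20.0

def run_trigger(events):
    return [e for e in events if level1(e) and high_level(e)]

if __name__ == "__main__":
    random.seed(1)
    events = [{"et_sum": random.expovariate(1 / 30.0),
               "has_muon": random.random() < 0.1,
               "muon_pt": random.expovariate(1 / 15.0)}
              for _ in range(100_000)]
    kept = run_trigger(events)
    print(f"accepted {len(kept)} of {len(events)} events "
          f"(reduction factor ~{len(events) / max(len(kept), 1):.0f})")
```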

  19. A DAQ-Device-Based Continuous Wave Near-Infrared Spectroscopy System for Measuring Human Functional Brain Activity

    Directory of Open Access Journals (Sweden)

    Gang Xu

    2014-01-01

    Full Text Available Over the last two decades, functional near-infrared spectroscopy (fNIRS) has become increasingly popular as a neuroimaging technique. An fNIRS instrument measures the local hemodynamic response, which indirectly reflects functional neural activity in the human brain. In this study, an easily implemented way to build a DAQ-device-based fNIRS system was proposed. The basic instrumentation components (light-source driving, signal conditioning, sensors, and optical fibers) of the fNIRS system were described. A digital in-phase and quadrature demodulation method was implemented in LabVIEW software to distinguish light from the different emitters. The effectiveness of the custom-made system was verified by simultaneous measurement with a commercial instrument (ETG-4000) during a Valsalva maneuver experiment. The light intensity data acquired from the two systems were highly correlated at both the lower wavelength (Pearson's correlation coefficient r = 0.92, P < 0.01) and the higher wavelength (r = 0.84, P < 0.01). Further, a mental arithmetic experiment was performed to detect neural activation in the prefrontal cortex. Of 9 participants, significant cerebral activation was detected in 6 subjects (P < 0.05) for oxyhemoglobin and in 8 subjects (P < 0.01) for deoxyhemoglobin.
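
    The digital in-phase and quadrature (I/Q) demodulation mentioned above can be sketched in a few lines. In the minimal example below, the carrier frequencies, amplitudes and sampling rate are assumptions (not values from the paper); it separates two frequency-multiplexed sources seen by a single detector.

```python
import numpy as np

# Sketch of digital I/Q demodulation: two light sources are modulated at
# distinct carrier frequencies; multiplying the detector signal by the
# in-phase and quadrature references and low-pass filtering (here a
# simple mean) recovers each source's amplitude.

fs = 10_000.0                      # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)      # 1 s of data
f1, f2 = 1_000.0, 1_700.0          # carrier frequencies of the emitters
a1, a2 = 0.8, 0.5                  # "light intensities" to be recovered

# The detector sees the sum of both modulated sources plus noise
signal = (a1 * np.sin(2 * np.pi * f1 * t)
          + a2 * np.sin(2 * np.pi * f2 * t)
          + 0.05 * np.random.randn(t.size))

def iq_demodulate(x, f, t):
    i = x * np.sin(2 * np.pi * f * t)   # in-phase component
    q = x * np.cos(2 * np.pi * f * t)   # quadrature component
    # Averaging acts as a crude low-pass filter; a real system would
    # use a proper FIR/IIR filter here.
    return 2 * np.hypot(i.mean(), q.mean())

print(f"recovered amplitudes: {iq_demodulate(signal, f1, t):.3f}, "
      f"{iq_demodulate(signal, f2, t):.3f}")
```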

  20. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    Legger, F

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  1. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.
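
    The dynamic distribution of popular data mentioned above can be pictured as a popularity counter that triggers extra replicas. Below is a toy sketch; the dataset names, sites and threshold are hypothetical.

```python
from collections import Counter

# Toy popularity-based replication: datasets accessed at least a
# threshold number of times get an extra replica at the least-loaded
# site that does not already hold one.

REPLICA_THRESHOLD = 3

def plan_replication(access_log, replicas, site_load):
    popularity = Counter(access_log)
    plans = []
    for dataset, hits in popularity.items():
        if hits >= REPLICA_THRESHOLD:
            candidates = [s for s in site_load
                          if s not in replicas.get(dataset, set())]
            if candidates:
                target = min(candidates, key=site_load.get)
                plans.append((dataset, target))
    return plans

if __name__ == "__main__":
    log = ["data12.A", "data12.A", "data12.A", "mc12.B"]
    replicas = {"data12.A": {"CERN"}, "mc12.B": {"CERN"}}
    load = {"CERN": 0.9, "BNL": 0.4, "TRIUMF": 0.6}
    print(plan_replication(log, replicas, load))  # -> [('data12.A', 'BNL')]
```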

  2. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user-interface machines installed in the control rooms, and various hardware and service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. Hardware and network monitoring of ATLAS TDAQ is based on NAGIOS, with a MySQL cluster behind it for accounting and for storing the collected monitoring data, together with IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an LDAP-based authentication and role management system. External access to the ATLAS online computing facilities is provided by means of gateways equipped with an accounting system. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating a 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.
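
    As a sketch of the LDAP-based user and role management described above, the snippet below looks up a user's roles with the ldap3 Python library. The server address, base DN and attribute names are hypothetical placeholders, not the actual TDAQ schema, and a reachable LDAP server is assumed.

```python
from ldap3 import Server, Connection, SAFE_SYNC

# Sketch of an LDAP role lookup, as used for authentication and role
# management. All DNs and attribute names are invented placeholders.

def fetch_roles(username):
    server = Server("ldap://ldap.example.org")
    conn = Connection(server, client_strategy=SAFE_SYNC, auto_bind=True)
    status, result, response, _ = conn.search(
        "ou=people,dc=example,dc=org",   # hypothetical base DN
        f"(uid={username})",
        attributes=["cn", "memberOf"],
    )
    if not status or not response:
        return []
    return response[0]["attributes"].get("memberOf", [])

if __name__ == "__main__":
    for role in fetch_roles("shifter01"):   # hypothetical account
        print(role)
```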

  3. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Pedraza Lopez, S; The ATLAS collaboration

    2012-01-01

    In order to reconstruct the trajectories of charged particles, ATLAS is equipped with a tracking system built using different technologies embedded in a 2 T solenoidal magnetic field. The ATLAS physics goals require high-resolution, unbiased measurement of all charged-particle kinematic parameters in order to assure accurate invariant mass reconstruction and interaction and decay vertex finding. These critically depend on systematic effects related to the alignment of the tracking system. In order to eliminate malicious systematic deformations, various advanced tools and techniques have been put in place. These include information from known mass resonances, the energy of electrons and positrons measured by the electromagnetic calorimeters, etc. Despite being stable under normal running conditions, the ATLAS tracking system responds to sudden environmental changes (temperature, magnetic field) with small collective deformations. These have to be identified and corrected in order to assure uniform, highest quality tracking...
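
    Track-based alignment of this kind is often reduced to a linearized least-squares problem for the alignment constants. Below is a minimal numpy sketch (not ATLAS software; the dimensions and values are invented).

```python
import numpy as np

# Minimal sketch of linearized track-based alignment: residuals between
# measured hits and track predictions are modeled as r = J @ a + noise,
# where a are the alignment corrections and J holds the derivatives of
# the residuals with respect to them. Solving the least-squares problem
# yields the corrections.

rng = np.random.default_rng(0)
n_residuals, n_params = 500, 3        # e.g. one module: (dx, dy, rot)
true_a = np.array([0.10, -0.05, 0.02])

J = rng.normal(size=(n_residuals, n_params))           # derivatives
r = J @ true_a + 0.01 * rng.normal(size=n_residuals)   # residuals

# Least-squares solution of J a ~= r
a_hat, *_ = np.linalg.lstsq(J, r, rcond=None)
print("estimated alignment corrections:", np.round(a_hat, 4))
```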

  4. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. It also empowers further data processing steps on the Grid performed by dozens of ATLAS physics groups, with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in the main ATLAS areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, development of the next-generation production system architecture is in progress. We report on scaling up the production system for a growing number of users provi...
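
    Petascale data integrity control of the kind mentioned above typically rests on checksum verification. Below is a minimal sketch using Adler-32, a checksum commonly used for this purpose; the file name and catalogue contents are hypothetical.

```python
import zlib

# Sketch of checksum-based data integrity control: recompute a file's
# Adler-32 checksum in chunks and compare it with the value stored in
# a catalogue.

def adler32_of(path, chunk_size=1 << 20):
    value = 1  # Adler-32 initial value
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            value = zlib.adler32(chunk, value)
    return f"{value & 0xFFFFFFFF:08x}"

def verify(path, catalogue):
    expected = catalogue.get(path)
    actual = adler32_of(path)
    return expected == actual, actual

if __name__ == "__main__":
    catalogue = {"sample.raw": "0b2c0a1f"}   # hypothetical stored value
    ok, actual = verify("sample.raw", catalogue)
    print("integrity OK" if ok else f"MISMATCH (got {actual})")
```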

  5. The ATLAS Production System Evolution

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration

    2017-01-01

    The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as the Grid, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system. The Production System has a sophisticated job fault recovery mechanism, which efficiently allows running multi-terabyte tasks without human intervention. We have implemented new features which allow automatic task submission and chaining of differe...
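
    The dynamic assignment of tasks to heterogeneous resources can be illustrated with a toy greedy matcher; the resource and task parameters below are invented for the example.

```python
# Toy sketch of dynamic task-to-resource assignment: each task declares
# its requirements and is matched greedily to the least-loaded resource
# that satisfies them.

resources = [
    {"name": "grid_site", "cores": 1000, "mem_gb": 2, "load": 0.7},
    {"name": "hpc",       "cores": 5000, "mem_gb": 4, "load": 0.3},
    {"name": "cloud",     "cores": 800,  "mem_gb": 8, "load": 0.5},
]

tasks = [
    {"id": "simul_1", "mem_gb": 2},
    {"id": "reco_1",  "mem_gb": 4},
    {"id": "deriv_1", "mem_gb": 8},
]

def assign(tasks, resources):
    plan = {}
    for task in tasks:
        fitting = [r for r in resources if r["mem_gb"] >= task["mem_gb"]]
        if fitting:
            best = min(fitting, key=lambda r: r["load"])
            plan[task["id"]] = best["name"]
            best["load"] += 0.05   # crude load-update heuristic
    return plan

print(assign(tasks, resources))
```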

  6. The ATLAS Tier-0: Overview and operational experience

    International Nuclear Information System (INIS)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-01-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for the prompt processing of the raw data coming from the online DAQ system, for archiving the raw and derived data on tape, for registering the data with the relevant catalogues, and for distributing them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several 'Full Dress Rehearsals' (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and its operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year, and will give an outlook on planned developments and the evolution of the system towards first collision data taking, now expected in late Autumn 2009.

  7. The ATLAS Tier-0: Overview and operational experience

    Science.gov (United States)

    Elsing, Markus; Goossens, Luc; Nairz, Armin; Negri, Guido

    2010-04-01

    Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for the prompt processing of the raw data coming from the online DAQ system, for archiving the raw and derived data on tape, for registering the data with the relevant catalogues, and for distributing them to the associated Tier-1 centers. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article will give an overview of the Tier-0 system, its data and work flows, and its operations model. It will review the operational experience gained in cosmic, commissioning, and FDR exercises during the past year, and will give an outlook on planned developments and the evolution of the system towards first collision data taking, now expected in late Autumn 2009.
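
    The Tier-0 data and work flows described in these records (prompt processing, tape archiving, catalogue registration, Tier-1 distribution) can be pictured as a simple sequential pipeline. All names and steps below are schematic assumptions, not the actual Tier-0 software.

```python
# Toy sketch of the Tier-0 data flow: each raw file is processed
# promptly, archived, registered in a catalogue and queued for
# distribution to Tier-1 centers.

def process(raw):            # prompt reconstruction
    return {"raw": raw, "derived": raw.replace(".RAW", ".ESD")}

def archive(files, tape):    # archive raw and derived data on "tape"
    tape.extend(files.values())

def register(files, catalogue):
    for kind, name in files.items():
        catalogue[name] = {"type": kind}

def distribute(files, tier1_queue):
    tier1_queue.append(files["derived"])

tape, catalogue, tier1_queue = [], {}, []
for raw in ["run00123.RAW", "run00124.RAW"]:
    files = process(raw)
    archive(files, tape)
    register(files, catalogue)
    distribute(files, tier1_queue)

print(len(tape), "files on tape;", len(tier1_queue), "queued for Tier-1")
```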

  8. PanDA: distributed production and distributed analysis system for ATLAS

    International Nuclear Information System (INIS)

    Maeno, T

    2008-01-01

    A new distributed software system was developed in the fall of 2005 for the ATLAS experiment at the LHC. This system, called PanDA, provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system [1], advanced error discovery and recovery procedures, and other features. In this talk, we will describe the PanDA software system. Special emphasis will be placed on the evolution of PanDA based on one and a half years of real experience in carrying out Computing System Commissioning data production [2] for ATLAS. The architecture of PanDA is well suited to the computing needs of the ATLAS experiment, which is expected to be one of the first HEP experiments to operate at the petabyte scale.
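
    The "late binding of jobs" named above means that jobs are not pre-assigned to worker nodes: a pilot process already running on a resource pulls the next job at the last moment. Below is a toy sketch with hypothetical payloads, not PanDA code.

```python
import queue

# Sketch of late binding: jobs wait in a central queue, and a pilot
# that is already running on some worker binds to work only when it
# actually executes.

job_queue = queue.Queue()
for payload in ["simulate ttbar", "reconstruct run 123", "merge outputs"]:
    job_queue.put(payload)

def pilot(name, jobs):
    """A pilot on some worker node: pulls jobs until the queue drains."""
    while True:
        try:
            job = jobs.get_nowait()   # late binding happens here
        except queue.Empty:
            return
        print(f"{name} executing: {job}")

pilot("pilot-on-siteA", job_queue)
```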

  9. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, GL; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through three trigger levels, selecting interesting events for analysis with a factor of 10^7 reduction in the data rate and a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system, a sophisticated computing environment is maintained, covering the online farm and the ATLAS control rooms, as well as a number of development and testing labs. The specific nature of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel, other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ system h...

  10. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, G-L; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through multiple trigger levels, selecting interesting events for analysis with a factor of $10^{7}$ reduction in the data rate and a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system, a sophisticated computing environment is maintained, covering the online farm and the ATLAS control rooms, as well as a number of development and testing labs. The specific nature of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel, other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ s...
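
    Health monitoring of a farm of this size is typically built from small probes that report standardized status codes, as in NAGIOS. Below is a toy probe in that style; the path and thresholds are invented, not TDAQ settings.

```python
import shutil
import sys

# Toy NAGIOS-style health probe: checks free disk space and exits with
# the conventional monitoring status codes (0=OK, 1=WARNING, 2=CRITICAL).

WARN_FRACTION, CRIT_FRACTION = 0.20, 0.10

def check_disk(path="/"):
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    if free_fraction < CRIT_FRACTION:
        print(f"CRITICAL - {free_fraction:.0%} free on {path}")
        return 2
    if free_fraction < WARN_FRACTION:
        print(f"WARNING - {free_fraction:.0%} free on {path}")
        return 1
    print(f"OK - {free_fraction:.0%} free on {path}")
    return 0

if __name__ == "__main__":
    sys.exit(check_disk())
```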

  11. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples and from knowledge of the underlying principles of image production. An atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted by nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas consists of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, and the specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and troubleshooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion in the digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples therefore include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations. Some image patterns are

  12. ATLAS TDAQ System Administration: evolution and re-design

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Lee, Christopher Jon; Scannicchio, Diana; Twomey, Matthew Shaun

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of $\\sim 3000$ servers, processing the data readout from $\\sim 100$ million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) a tremendous amount of work was done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of netbooted nodes, this required a completely new design of the netbooting system. In parallel, the migration of the Configuration Management systems to Puppet has been completed for both netbooted and locally booted hosts; the Post-Boot Scripts system and...

  13. System Architecture Modeling for Technology Portfolio Management using ATLAS

    Science.gov (United States)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as the Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
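
    As a toy illustration of the comparison step described above (hypothetical architecture element, technologies, numbers and weights; nothing here comes from ATLAS or the TTB), alternative technology choices can be rolled up into a single figure of merit:

        # Hypothetical technology options for one architecture element; all
        # masses, costs and reliabilities are invented for illustration only.
        OPTIONS = {
            "cryogenic_stage": {"dry_mass_kg": 4200, "cost_musd": 310, "reliability": 0.97},
            "storable_stage":  {"dry_mass_kg": 5100, "cost_musd": 240, "reliability": 0.99},
        }

        def figure_of_merit(name, w_mass=0.5, w_cost=0.3, w_rel=0.2):
            # Normalize mass and cost against the best option, then combine
            # with (arbitrary) weights; higher is better.
            best_mass = min(o["dry_mass_kg"] for o in OPTIONS.values())
            best_cost = min(o["cost_musd"] for o in OPTIONS.values())
            o = OPTIONS[name]
            return (w_mass * best_mass / o["dry_mass_kg"]
                    + w_cost * best_cost / o["cost_musd"]
                    + w_rel * o["reliability"])

        for name in OPTIONS:
            print(name, round(figure_of_merit(name), 3))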

  14. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years, with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 200 petabytes spread over 130 storage sites and can handle file transfer rates of up to 30 Hz. In this talk, we discuss the experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service, and the evolution of the system over its first year of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline future developments, which mainly focus on performance and automation. Finally, we discuss the long-term evolution of ATLAS data management.
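
    The core bookkeeping behind such a system can be pictured with a toy replica catalogue (an illustration only; this deliberately does not use the Rucio client API): datasets are collections of files, each file has replicas at sites, and a dataset is fully available at a site only if all of its files are.

        from collections import defaultdict

        # Toy replica catalogue: file identifier -> set of sites holding a copy.
        replicas = defaultdict(set)

        def add_replica(file_id, site):
            replicas[file_id].add(site)

        def sites_with_full_dataset(files):
            # A site holds the dataset only if it holds every file in it.
            common = None
            for f in files:
                common = set(replicas[f]) if common is None else common & replicas[f]
            return common or set()

        dataset = ["scope:file1", "scope:file2"]
        add_replica("scope:file1", "SITE_A")
        add_replica("scope:file2", "SITE_A")
        add_replica("scope:file1", "SITE_B")
        print(sites_with_full_dataset(dataset))  # {'SITE_A'}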

  15. The ATLAS Detector Safety System

    CERN Multimedia

    Helfried Burckhart; Kathy Pommes; Heidi Sandaker

    The ATLAS Detector Safety System (DSS) has the mandate to put the detector in a safe state in case an abnormal situation arises which could be potentially dangerous for the detector. It covers the CERN alarm severity levels 1 and 2, which address serious risks for the equipment. The highest level 3, which also includes danger for persons, is the responsibility of the CERN-wide system CSAM, which always triggers an intervention by the CERN fire brigade. DSS works independently from, and hence complements, the Detector Control System, which is the tool to operate the experiment. The DSS is organized into a Front-End (FE), which autonomously fulfills the safety functions, and a Back-End (BE) for interaction and configuration. The overall layout is shown in the figure "ATLAS DSS configuration". The FE implementation is based on a redundant Programmable Logic Controller (PLC) system of a kind also used in industry for such safety applications. Each of the two PLCs alone, one located underground and one at the s...
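
    The front-end logic described above can be caricatured as an alarm matrix mapping sensed conditions to severity levels and protective actions. The sketch below is a hypothetical software toy (sensor names, thresholds and actions all invented), not the redundant PLC implementation itself:

        # Hypothetical DSS-style alarm matrix: (sensor, trip condition,
        # severity level, protective action). All entries are invented.
        RULES = [
            ("cooling_water_flow", lambda v: v < 0.2,  2, "cut power to affected racks"),
            ("rack_smoke_level",   lambda v: v > 0.5,  2, "cut power to affected racks"),
            ("hall_temperature",   lambda v: v > 35.0, 1, "raise warning to operators"),
        ]

        def evaluate(readings):
            # Return triggered actions, highest severity first.
            fired = [(level, sensor, action)
                     for sensor, tripped, level, action in RULES
                     if sensor in readings and tripped(readings[sensor])]
            return sorted(fired, reverse=True)

        print(evaluate({"cooling_water_flow": 0.05, "hall_temperature": 36.0}))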

  16. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...
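
    The chaining idea can be sketched in a few lines of Python (step names follow the abstract; the dataset naming and task records are invented for illustration): each step of the Monte Carlo workflow becomes a task whose input is the output dataset of the previous step.

        STEPS = ["generate", "simulate", "digitize", "trigger", "reconstruct", "ntuple"]

        def build_mc_workflow(sample, steps=STEPS):
            # Chain the steps into tasks; each task reads the dataset produced
            # by the previous one (naming purely illustrative).
            tasks, upstream = [], f"{sample}.request"
            for step in steps:
                output = f"{sample}.{step}.out"
                tasks.append({"step": step, "input": upstream, "output": output})
                upstream = output
            return tasks

        for task in build_mc_workflow("mc.ttbar_sample"):
            print(f"{task['step']:12s} <- {task['input']}")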

  17. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D.

    2007-03-15

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid systems and the underlying scaling methodology.
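
    The quoted ratios fix the other first-order scale factors. A quick check, assuming the usual integral-test scaling relations (volumes scale as length times area; with the working fluid, pressure and temperatures preserved, velocities scale as the square root of the length scale, so flows and power scale as area times the square root of length — stated here as assumptions, not taken from the report):

        from fractions import Fraction

        length = Fraction(1, 2)    # height/length scale of ATLAS vs. APR1400
        area   = Fraction(1, 144)  # flow-area scale

        volume = length * area     # volumes scale as length * area
        print("volume scale =", volume)                               # 1/288
        print("flow/power scale ~", float(area) * float(length) ** 0.5)  # ~1/204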

  18. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D.

    2007-03-01

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will help in understanding the basic design of the ATLAS fluid systems and the underlying scaling methodology.

  19. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  20. Readout Unit-FPGA version for link multiplexers, DAQ and VELO trigger

    CERN Document Server

    Müller, H; Guirao, A; Bal, F

    2003-01-01

    The FPGA-based Readout Unit (RU) was designed as the entry stage to the readout networks of the LHCb data acquisition and L1-VELO topology trigger systems. The RU performs subevent building from up to 16 custom S-link inputs towards a commercial readout network via a PCI interface card. For output to custom links, as required in data-link multiplexer applications, an S-link transmitter output interface is alternatively available. The baseline readout networks for the RU are intelligent Gbit-Ethernet NIC cards for the DAQ system and an SCI shared-memory network for the L1-VELO system. New protocols, such as 10-Gbit Ethernet or Infiniband, may be adopted as soon as suitable PCI interfaces and Linux device drivers become available. The two baseline RU modes of operation are: (1) link multiplexer, from N S-links to a single S-link, and (2) event-builder interface, from quad S-link to a PCI network interface.
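
    A heavily simplified software analogue of the subevent-building mode (event numbering, payloads and the class itself are invented; the real logic runs in the FPGA): collect one fragment per input link for each event number and emit the assembled subevent once all links have contributed.

        from collections import defaultdict

        class SubEventBuilder:
            # Toy subevent builder: gathers one fragment per link per event id
            # and releases the ordered subevent when all links have reported.
            def __init__(self, n_links=16):  # the RU accepts up to 16 S-link inputs
                self.n_links = n_links
                self.pending = defaultdict(dict)  # event_id -> {link: payload}

            def add_fragment(self, event_id, link, payload):
                self.pending[event_id][link] = payload
                if len(self.pending[event_id]) == self.n_links:
                    fragments = self.pending.pop(event_id)
                    return [fragments[link] for link in sorted(fragments)]
                return None  # subevent not yet complete

        builder = SubEventBuilder(n_links=2)
        builder.add_fragment(7, 0, b"\x01")
        print(builder.add_fragment(7, 1, b"\x02"))  # [b'\x01', b'\x02']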

  1. ATLAS Level-1 Calorimeter Trigger Subsystem Tests of a Prototype Cluster Processor Module

    CERN Document Server

    Garvey, J; Apostologlou, P; Ay, C; Barnett, B M; Bauss, B; Brawn, I P; Bohm, C; Dahlhoff, A; Davis, A O; Edwards, J; Eisenhandler, E F; Gee, C N P; Gillman, A R; Hanke, P; Hellman, S; Hidévgi, A; Hillier, S J; Jakobs, K; Kluge, E E; Landon, M; Mahboubi, K; Mahout, G; Meier, K; Meshkov, P; Moye, T H; Mills, D; Moyse, E; Nix, O; Penno, K; Perera, V J O; Qian, W; Schmitt, K; Schäfer, U; Silverstein, S; Staley, R J; Thomas, J; Trefzger, T M; Watkins, P M; Watson, A; 9th Workshop On Electronics For LHC Experiments - LECC 2003

    2003-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce trigger multiplicity and Region-of-Interest (RoI) information. The trigger will also provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes by using Readout Driver (ROD) modules. The CP Modules (CPMs) are designed to find isolated electron/photon and hadron/tau clusters in overlapping windows of trigger towers. Each pipelined CPM processes 8-bit data from a total of 128 trigger towers at each LHC crossing. Four full-specification CPM prototypes have been built, and results of complete tests on individual boards will be presented. These modules were then integrated with other modules to build an ATLAS Level-1 Calorimeter Trigger subsystem test bench. Real-time data were exchanged between modules, and time-slice readout data were tagged and transferr...
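
    The window algorithm can be caricatured in a few lines (a toy simplification, not the actual e/gamma algorithm; the real CPMs also apply isolation and overlap-resolution rules in firmware): slide an overlapping 2x2 window over the tower grid and keep windows that pass a threshold and are local maxima.

        def find_clusters(towers, threshold):
            # towers: 2D list of trigger-tower energies (8-bit in the real system).
            n_eta, n_phi = len(towers), len(towers[0])

            def window_sum(i, j):  # energy in the 2x2 window anchored at (i, j)
                return sum(towers[i + di][j + dj] for di in (0, 1) for dj in (0, 1))

            clusters = []
            for i in range(n_eta - 1):
                for j in range(n_phi - 1):
                    s = window_sum(i, j)
                    neighbours = [window_sum(a, b)
                                  for a in range(max(0, i - 1), min(n_eta - 1, i + 2))
                                  for b in range(max(0, j - 1), min(n_phi - 1, j + 2))]
                    if s >= threshold and s == max(neighbours):  # crude de-overlap
                        clusters.append(((i, j), s))
            return clusters

        grid = [[0] * 6 for _ in range(6)]
        grid[2][3], grid[2][4], grid[3][3] = 40, 25, 10   # one isolated deposit
        print(find_clusters(grid, threshold=50))          # [((2, 3), 75)]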

  2. The Detector Safety System of the ATLAS experiment

    International Nuclear Information System (INIS)

    Beltramello, O; Burckhart, H J; Franz, S; Jaekel, M; Jeckel, M; Lueders, S; Morpurgo, G; Santos Pedrosa, F dos; Pommes, K; Sandaker, H

    2009-01-01

    The ATLAS detector at the Large Hadron Collider at CERN is one of the most advanced detectors ever built for High Energy Physics experiments. It consists of on the order of ten functionally independent sub-detectors, which all have dedicated services such as power, cooling and gas supply. A Detector Safety System has been built to detect possible operational problems and abnormal, potentially dangerous situations at an early stage and, if needed, to bring the relevant part of ATLAS automatically into a safe state. The procedures and the configuration specific to ATLAS are described in detail and first operational experience is given.

  3. Status and Evolution of ATLAS Workload Management System PanDA

    CERN Document Server

    AUTHOR|(CDS)2067365; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC uses a sophisticated workload management system, PanDA, to provide thousands of physicists with access to distributed computing resources of unprecedented scale. This system has proved to be robust and scalable during three years of LHC operations. We describe the design and performance of PanDA in ATLAS. The features which make PanDA successful in ATLAS could be applicable to other exabyte-scale scientific projects. We describe plans to evolve PanDA towards a general workload management system for the new Big Data initiative announced by the US government. Other planned future improvements to PanDA will also be described.

  4. FE-I4 pixel chip characterization with USBpix3 test system

    Energy Technology Data Exchange (ETDEWEB)

    Filimonov, Viacheslav; Gonella, Laura; Hemperek, Tomasz; Huegging, Fabian; Janssen, Jens; Krueger, Hans; Pohl, David-Leon; Wermes, Norbert [University of Bonn, Bonn (Germany)

    2015-07-01

    The USBpix readout system is a small and lightweight test system for the ATLAS pixel readout chips. It is widely used to operate and characterize FE-I4 pixel modules in lab and test beam environments. For multi-chip modules, the resources on the Multi-IO board, the central control unit of the readout system, reach their limits, which makes the simultaneous readout of more than one chip at a time challenging. Therefore an upgrade of the current USBpix system has been developed. The upgraded system, called USBpix3, is the main focus of this talk. Characterization of single-chip FE-I4 modules was performed with a USBpix3 prototype (digital, analog, threshold and source scans; tuning). PyBAR (Bonn ATLAS Readout in Python scripting language) was used as the readout software; it consists of FE-I4 DAQ and data analysis libraries in Python. The presentation describes the USBpix3 system, the results of FE-I4 module characterization, and the preparation for multi-chip module and multi-module readout with USBpix3.
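
    The threshold scan named above reduces to a simple procedure: inject a fixed number of test pulses per charge setting, record the fraction of hits, and fit the resulting occupancy-versus-charge "s-curve" (an error function for Gaussian noise) for threshold and noise. A self-contained sketch with toy numbers (not PyBAR code):

        import math, random

        def s_curve(q, threshold, noise):
            # Hit probability for injected charge q with Gaussian noise:
            # 0.5 * (1 + erf((q - threshold) / (sqrt(2) * noise))).
            return 0.5 * (1.0 + math.erf((q - threshold) / (math.sqrt(2) * noise)))

        def threshold_scan(threshold=3000, noise=120, n_inj=100):
            random.seed(1)
            scan = []
            for q in range(2400, 3601, 100):   # injected charge in electrons
                p = s_curve(q, threshold, noise)
                hits = sum(random.random() < p for _ in range(n_inj))
                scan.append((q, hits / n_inj))
            return scan

        scan = threshold_scan()
        est = next(q for q, occ in scan if occ >= 0.5)  # crude 50% crossing
        print("estimated threshold ~", est, "e-")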

  5. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2013-01-01

    The ATLAS Production System is the top-level workflow manager which translates physicists' needs for production-level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates the corresponding data processing workflows. Th...
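
    The Meta-Task bookkeeping described above is essentially dependency resolution over a task graph. A minimal sketch (hypothetical step names and dependencies; standard library only, Python 3.9+):

        from graphlib import TopologicalSorter

        # A hypothetical Meta-Task: each task lists the tasks it depends on.
        meta_task = {
            "simulate":    set(),
            "digitize":    {"simulate"},
            "reconstruct": {"digitize"},
            "merge":       {"reconstruct"},
            "ntuple":      {"merge"},
        }

        # A valid submission order: every task appears after its inputs exist.
        print(list(TopologicalSorter(meta_task).static_order()))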

  6. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    The ATLAS Production System is the top-level workflow manager which translates physicists' needs for production-level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates the corresponding data processing workflows. Th...

  7. Status of the ATLAS control system upgrade

    International Nuclear Information System (INIS)

    Munson, F.H.; Ferraretto, M.; Rutherford, B.

    1992-01-01

    Certain components of the ATLAS control system are two generations behind today's technology. It has been decided to upgrade the control system, in part, by replacing Digital Equipment Corporation (DEC) PDP-11 computers with present-day VAX technology. Two primary goals have been defined for the upgraded control system. The first of these goals is to keep additional "in-house" written software to a minimum, while providing the portability necessary to ensure the continued use of existing software. In an attempt to achieve this goal, commercially available software has been utilized to provide a foundation for the final control-system configuration. The second goal is to develop the new control system while not interfering with accelerator operations. This paper describes some of the motivation for upgrading the ATLAS control system, the basic features of the new control system, and the present status of the system's development.

  8. Noise evaluation of silicon strip super-module with ABCN250 readout chips for the ATLAS detector upgrade at the High Luminosity LHC

    Energy Technology Data Exchange (ETDEWEB)

    Todome, K., E-mail: todome@hep.phys.titech.ac.jp [Department of Physics, Tokyo Institute of Technology, Ookayama 2-12-1, Meguro-ku, Tokyo 152-8551 (Japan); Solid State Div., Hamamatsu Photonics K.K., 1126-1, Ichino-cho, Higashi-ku, Hamamatsu-shi, Shizuoka 435-8558 (Japan)]; Jinnouchi, O. [Department of Physics, Tokyo Institute of Technology, Ookayama 2-12-1, Meguro-ku, Tokyo 152-8551 (Japan)]; Clark, A.; Barbier, G.; Cadoux, F.; Favre, Y.; Ferrere, D.; Gonzalez-Sevilla, S.; Iacobucci, G.; La Marra, D.; Perrin, E.; Weber, M. [DPNC, University of Geneva, CH-1211 Geneva 4 (Switzerland)]; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Unno, Y. [Institute of Particle and Nuclear Study, KEK, Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan)]; Takashima, R. [Department of Science Education, Kyoto University of Education, Kyoto 612-8522 (Japan)]; Tojo, J. [Department of Physics, Faculty of Science, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395 (Japan)]; Kono, T. [Ochadai Academic Production, Ochanomizu University, 2-1-1, Otsuka, Bunkyo-ku, Tokyo 112-8610 (Japan)]; and others

    2016-09-21

    Toward the High Luminosity LHC (HL-LHC), the whole ATLAS inner tracker will be replaced, including the semiconductor tracker (SCT), the silicon micro-strip detector for tracking charged particles. In the development of the SCT, integration of the detector is an important issue. One of the concepts of integration is the "super-module", in which individual modules are assembled to produce the SCT ladder. A super-module prototype has been developed to demonstrate its functionality. One of the concerns in integrating the super-modules is the electrical coupling between modules, because it may increase the intrinsic noise of the system. To investigate the electrical performance of the prototype, a new Data Acquisition (DAQ) system based on SEABAS has been developed. The electrical performance of the super-module prototype, especially the input noise and the random noise hit rate, was investigated using the SEABAS system.
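
    The "random noise hit rate" quoted above is, in essence, a counting measurement. A minimal sketch of the bookkeeping (all numbers invented; this is not the SEABAS analysis code):

        def noise_occupancy(n_hits, n_channels, n_triggers):
            # Average probability that a given strip fires in a given
            # triggered event, from hits counted with no beam or source.
            return n_hits / (n_channels * n_triggers)

        # e.g. 24 noise hits seen in 1e6 random triggers across 1536 strips:
        occ = noise_occupancy(24, 1536, 1_000_000)
        print(f"noise occupancy = {occ:.2e} hits/channel/trigger")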

  9. Performance of the ATLAS trigger system in 2015

    Energy Technology Data Exchange (ETDEWEB)

    Aaboud, M. [Univ. Mohamed Premier et LPTPM, Oujda (Morocco). Faculte des Sciences; Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Collaboration: Atlas Collaboration; and others

    2017-05-15

    During 2015 the ATLAS experiment recorded 3.8 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 13 TeV. The ATLAS trigger system is a crucial component of the experiment, responsible for selecting events of interest at a recording rate of approximately 1 kHz from up to 40 MHz of collisions. This paper presents a short overview of the changes to the trigger and data acquisition systems during the first long shutdown of the LHC and shows the performance of the trigger system and its components based on the 2015 proton-proton collision data. (orig.)

  10. Performance of the ATLAS Trigger System in 2015

    CERN Document Server

    Aaboud, Morad; Abbott, Brad; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; The ATLAS collaboration; and others
Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Rzehorz, Gerhard Ferdinand; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sánchez, Javier; Sanchez Martinez, Victoria; Sanchez Pineda, Arturo; Sandaker, Heidi; Sandbach, Ruth Laura; Sandhoff, Marisa; Sandoval, Carlos; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sato, Koji; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Savic, Natascha; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schachtner, Balthasar Maria; Schaefer, Douglas; Schaefer, Leigh; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schier, Sheena; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt-Sommerfeld, Korbinian Ralf; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schott, Matthias; Schouwenberg, Jeroen; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schulte, Alexandra; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shirabe, Shohei; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyed Ruhollah; Shope, David Richard; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sickles, Anne Marie; Sidebo, Per Edvin; Sideras Haddad, Elias; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; 
Simon, Dorian; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smiesko, Juraj; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Joshua Wyatt; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snyder, Ian Michael; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Hong Ye; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spieker, Thomas Malte; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Suster, Carl; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Swift, Stewart Patrick; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Masahiro; Tanaka, Reisaburo; Tanaka, Shuji; Tanioka, Ryo; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Paul; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorov, 
Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Tornambe, Peter; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tu, Yanjun; Tudorache, Alexandra; Tudorache, Valentina; Tulbure, Traian Tiberiu; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turgeman, Daniel; Turk Cakir, Ilkay; Turra, Ruggero; Tuts, Michael; Ucchielli, Giulia; Ueda, Ikuo; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usui, Junya; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vasquez, Gerardo; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veeraraghavan, Venkatesh; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tingting; Wang, Wenxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Weber, Stephen; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, 
Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Michael David; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wolf, Tim Michael Heinz; Wolff, Robert; Wolter, Marcin Wladyslaw; Wolters, Helmut; Worm, Steven D; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xi, Zhaoxu; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zacharis, George; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Liqing; Zhang, Matt; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zwalinski, Lukasz

    2017-05-18

    During 2015 the ATLAS experiment recorded $3.8 \\mathrm{fb}^{-1}$ of proton--proton collision data at a centre-of-mass energy of $13 \\mathrm{TeV}$. The ATLAS trigger system is a crucial component of the experiment, responsible for selecting events of interest at a recording rate of approximately 1 kHz from up to 40 MHz of collisions. This paper presents a short overview of the changes to the trigger and data acquisition systems during the first long shutdown of the LHC and shows the performance of the trigger system and its components based on the 2015 proton--proton collision data.

  11. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system is presented. Results from scalability tests are also presented where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.
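
    The control pattern such a supervision system implements can be illustrated with a short sketch (Python purely for illustration; the worker protocol, command names and farm size are assumptions, not the ATLAS Online Software API): a supervisor drives a farm of processes through a configure/start/stop cycle and then collects their exit status.

      # Minimal supervisor sketch: spawn a small farm of worker processes and
      # walk them through a configure/start/stop/terminate cycle.
      import multiprocessing as mp

      def hlt_worker(commands):
          state = "initial"
          transitions = {"configure": "configured", "start": "running", "stop": "configured"}
          while True:
              cmd = commands.get()              # block until the supervisor speaks
              if cmd == "terminate":
                  break
              state = transitions.get(cmd, state)

      if __name__ == "__main__":
          queues = [mp.Queue() for _ in range(8)]   # one command channel per worker
          farm = [mp.Process(target=hlt_worker, args=(q,)) for q in queues]
          for p in farm:
              p.start()
          for cmd in ("configure", "start", "stop", "terminate"):
              for q in queues:                  # broadcast each transition to the farm
                  q.put(cmd)
          for p in farm:
              p.join()                          # collect every worker's exit status
              print(f"worker {p.pid} exited with code {p.exitcode}")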

  12. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    Energy Technology Data Exchange (ETDEWEB)

    Fischler, M. [Fermilab]; Green, C. [Fermilab]; Kowalkowski, J. [Fermilab]; Norman, A. [Fermilab]; Paterno, M. [Fermilab]; Rechenmacher, R. [Fermilab]

    2012-06-22

    To make its core measurements, the NOvA experiment needs to make real-time data-driven decisions involving beam-spill time correlation and other triggering issues. NOvA-DDT is a prototype Data-Driven Triggering system, built using the Fermilab artdaq generic DAQ/Event-building toolkit. This provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module--unchanged--as a component of the online triggering decisions. The NOvA-artdaq architecture chosen has significant advantages, including graceful degradation if the triggering decision software fails or cannot be done quickly enough for some fraction of the time-slice ``events.'' We have tested and measured the performance and overhead of NOvA-DDT using an actual Hough transform based trigger decision module taken from the NOvA offline software. The results of these tests--98 ms mean time per event on only 1/16 of the available processing power of a node, and overheads of about 2 ms per event--provide a proof of concept: NOvA-DDT is a viable strategy for data acquisition, event building, and trigger processing at the NOvA far detector.
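
    The reuse of an offline algorithm as an online trigger decision can be made concrete with a toy Hough-transform trigger (a sketch under invented binning and threshold, not the actual NOvA module): each hit votes for the lines it could lie on, and the event is accepted when enough votes accumulate in a single (theta, rho) bin.

      # Toy Hough-transform trigger: accept the event if any straight line
      # collects at least `threshold` hits. Binning and threshold are invented.
      import math
      from collections import Counter

      def hough_accept(hits, threshold=8, n_theta=64, rho_bin=2.0):
          votes = Counter()
          for x, y in hits:
              for i in range(n_theta):          # sample line orientations in [0, pi)
                  theta = math.pi * i / n_theta
                  rho = x * math.cos(theta) + y * math.sin(theta)
                  votes[(i, round(rho / rho_bin))] += 1
          return max(votes.values(), default=0) >= threshold

      # Ten collinear hits (a cosmic-ray-like track) should fire the trigger.
      track = [(t, 0.5 * t + 3.0) for t in range(10)]
      print(hough_accept(track))                # -> True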

  13. Task management in the new ATLAS production system

    International Nuclear Information System (INIS)

    De, K; Golubkov, D; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
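
    The task hierarchy described above can be pictured with a toy object model (the class and field names are assumptions for illustration, not the actual DEFT/JEDI schema): a Meta-Task groups inter-dependent tasks, and each task is split dynamically into jobs.

      # Toy Meta-Task -> task -> job decomposition with JEDI-style splitting.
      from dataclasses import dataclass, field

      @dataclass
      class Job:
          task_id: int
          first_event: int
          n_events: int

      @dataclass
      class Task:
          task_id: int
          total_events: int
          jobs: list = field(default_factory=list)

          def split(self, events_per_job):
              # Carve the task into jobs; a real system would size each chunk
              # dynamically from the resources reported by the workload manager.
              for first in range(0, self.total_events, events_per_job):
                  n = min(events_per_job, self.total_events - first)
                  self.jobs.append(Job(self.task_id, first, n))

      meta_task = [Task(1, 10_000), Task(2, 2_500)]     # two related tasks
      for task in meta_task:
          task.split(events_per_job=1_000)
      print(sum(len(t.jobs) for t in meta_task))        # -> 13 jobs for PanDA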

  14. Evaluation and proposal of improvement for the measurement system in ATLAS

    International Nuclear Information System (INIS)

    Cho, Dong Woo; Kim, Jong Rok; Park, Jun Kwon

    2007-03-01

    The project independently evaluated the validity and reliability of the measurement system in ATLAS, and then proposed plans to improve the measurement system based on the evaluation results. To this end, we assessed the design, technical background, and verification data of the measurement system in ATLAS. From this evaluation, we proposed improvement plans for the parts that require them

  15. Exploring the human body space: A geographical information system based anatomical atlas

    Directory of Open Access Journals (Sweden)

    Antonio Barbeito

    2016-06-01

    Anatomical atlases allow mapping the anatomical structures of the human body. Early versions of these systems consisted of analogue representations with informative text and labeled images of the human body. With computer systems, digital versions emerged and the third and fourth dimensions were introduced. Consequently, these systems increased their efficiency, allowing more realistic visualizations with improved interactivity and functionality. The 4D atlases allow modeling changes over time on the structures represented. The anatomical atlases based on geographic information system (GIS) environments allow the creation of platforms with a high degree of interactivity and new tools to explore and analyze the human body. In this study we expand the functions of a human body representation system by creating new vector data, topology, functions, and an improved user interface. The new prototype emulates a 3D GIS with a topological model of the human body, replicates the information provided by anatomical atlases, and provides a higher level of functionality and interactivity. At this stage, the developed system is intended to be used as an educational tool and integrates into the same interface the typical representations of surface and sectional atlases.

  16. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the front-end detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  17. The Fiber Optic System for the Advanced Topographic Laser Altimeter System (ATLAS) Instrument

    Science.gov (United States)

    Ott, Melanie N.; Thomes, Joe; Onuma, Eleanya; Switzer, Robert; Chuska, Richard; Blair, Diana; Frese, Erich; Matyseck, Marc

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS) Instrument has been in integration and testing over the past 18 months in preparation for the Ice, Cloud and Land Elevation Satellite - 2 (ICESat-2) Mission, scheduled to launch in 2017. ICESat-2 is the follow-on to ICESat, which launched in 2003 and operated until 2009. ATLAS will measure the elevation of ice sheets, glaciers and sea ice, or the "cryosphere" (as well as terrain), to provide data for assessing the earth's global climate changes. Whereas ICESat's instrument, the Geo-Science Laser Altimeter (GLAS), used a single beam with a 70 m spot on the ground and a distance between spots of 170 m, ATLAS will measure a 10 m spot with a spacing of 70 cm, using six beams to measure terrain height changes as small as 4 mm. The ATLAS pulsed transmission system consists of two lasers operating at 532 nm with transmitter optics for beam steering, a diffractive optical element that splits the signal into 6 separate beams, receivers for start pulse detection and a wavelength tracking system. The optical receiver telescope system consists of optics that focus all six beams into optical fibers that feed a filter system that transmits the signal via fiber assemblies to the detectors. Also included on the instrument is a system that calibrates the alignment of the transmitted pulses to the receiver optics for precise signal capture. The larger electro-optical subsystems for transmission, calibration, and signal reception stay aligned and transmit efficiently thanks to the optical fiber system that links them together. The robust design of the fiber optic system, consisting of a variety of multi-fiber arrays and simplex assemblies with multiple fiber core sizes and types, will enable the system to maintain consistent critical alignments for the entire life of the mission. Some of the development approaches used to meet the challenging optical system requirements for ATLAS are discussed here.

  18. The fiber optic system for the Advanced Topographic Laser Altimeter System (ATLAS) instrument.

    Science.gov (United States)

    Ott, Melanie N; Thomes, Joe; Onuma, Eleanya; Switzer, Robert; Chuska, Richard; Blair, Diana; Frese, Erich; Matyseck, Marc

    2016-08-28

    The Advanced Topographic Laser Altimeter System (ATLAS) Instrument has been in integration and testing over the past 18 months in preparation for the Ice, Cloud and Land Elevation Satellite - 2 (ICESat-2) Mission, scheduled to launch in 2017. ICESat-2 is the follow-on to ICESat, which launched in 2003 and operated until 2009. ATLAS will measure the elevation of ice sheets, glaciers and sea ice, or the "cryosphere" (as well as terrain), to provide data for assessing the earth's global climate changes. Whereas ICESat's instrument, the Geo-Science Laser Altimeter (GLAS), used a single beam with a 70 m spot on the ground and a distance between spots of 170 m, ATLAS will measure a 10 m spot with a spacing of 70 cm, using six beams to measure terrain height changes as small as 4 mm [1]. The ATLAS pulsed transmission system consists of two lasers operating at 532 nm with transmitter optics for beam steering, a diffractive optical element that splits the signal into 6 separate beams, receivers for start pulse detection and a wavelength tracking system. The optical receiver telescope system consists of optics that focus all six beams into optical fibers that feed a filter system that transmits the signal via fiber assemblies to the detectors. Also included on the instrument is a system that calibrates the alignment of the transmitted pulses to the receiver optics for precise signal capture. The larger electro-optical subsystems for transmission, calibration, and signal reception stay aligned and transmit efficiently thanks to the optical fiber system that links them together. The robust design of the fiber optic system, consisting of a variety of multi-fiber arrays and simplex assemblies with multiple fiber core sizes and types, will enable the system to maintain consistent critical alignments for the entire life of the mission. Some of the development approaches used to meet the challenging optical system requirements for ATLAS are discussed here.

  19. Planetary Data Systems (PDS) Imaging Node Atlas II

    Science.gov (United States)

    Stanboli, Alice; McAuley, James M.

    2013-01-01

    The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through greater than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, JAVA), Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated meta-data from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions of similar targets. If desired, the end user can also use a mission-specific view of the Atlas. The mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine. This tool includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. This tool lets the end user query information about each image, and ignores the data that the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.
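
    The area-of-interest query described above reduces to a range filter over image metadata; the sketch below (invented field names and records, not the actual PIA/UPC schema) shows the cross-mission search idea.

      # Filter an image catalogue by target and a latitude/longitude window,
      # mimicking the Atlas area-of-interest search. Records are invented.
      records = [
          {"mission": "Mars Global Surveyor", "target": "Mars",  "lat": 4.5,   "lon": 137.4, "id": "m0101"},
          {"mission": "Cassini",              "target": "Titan", "lat": -10.0, "lon": 60.0,  "id": "c2201"},
          {"mission": "2001 Mars Odyssey",    "target": "Mars",  "lat": 18.9,  "lon": 226.2, "id": "o3302"},
      ]

      def search(records, target, lat_range, lon_range):
          lat_min, lat_max = lat_range
          lon_min, lon_max = lon_range
          return [r for r in records
                  if r["target"] == target
                  and lat_min <= r["lat"] <= lat_max
                  and lon_min <= r["lon"] <= lon_max]

      # All Mars images near the equator, regardless of which mission took them:
      print(search(records, "Mars", (-20, 20), (100, 250)))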

  20. Control and Data Acquisition System of the ATLAS Facility

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kwon, Tae-Soon; Cho, Seok; Park, Hyun-Sik; Baek, Won-Pil; Kim, Jung-Taek

    2007-02-01

    This report describes the control and data acquisition system of an integral effect test facility, the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation) facility, which has recently been constructed at KAERI (Korea Atomic Energy Research Institute). The control and data acquisition system of the ATLAS is built on a hybrid distributed control system (DCS) by RTP Corp. The ARIDES system on a Linux platform, provided by BNF Technology Inc., is used as the control software. The IO signals consist of 1995 channels and are processed at 10 Hz. The Human-Machine Interface (HMI) consists of 43 processing windows, classified according to fluid system. All control devices can be operated by manual, auto, sequence, group, and table control methods. The monitoring system can display the real-time trend or historical data of the selected IO signals on LCD monitors in graphical form. Data logging can be started or stopped by the operator, and the logging frequency can be set to 0.5, 1, 2, or 10 Hz. The fluid systems of the ATLAS facility range from the primary system to auxiliary systems. Each fluid system has a control similarity to the prototype plant, APR1400/OPR1000
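
    The selectable-frequency logging described above can be sketched in a few lines (Python for illustration only; the real system is a commercial DCS, and the channel readings here are random stand-ins):

      # Sample IO channels at one of the allowed frequencies (0.5, 1, 2, 10 Hz)
      # and append timestamped readings to a CSV log.
      import csv, random, time

      ALLOWED_HZ = (0.5, 1, 2, 10)

      def log_channels(path, n_channels=4, freq_hz=2, n_samples=5):
          assert freq_hz in ALLOWED_HZ, "frequency must be one of the allowed rates"
          period = 1.0 / freq_hz
          with open(path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["time"] + [f"ch{i}" for i in range(n_channels)])
              for _ in range(n_samples):
                  readings = [random.uniform(0, 10) for _ in range(n_channels)]
                  writer.writerow([time.time()] + readings)   # stand-in for real channels
                  time.sleep(period)

      log_channels("atlas_log.csv")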

  1. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC (2009-2013) at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing it to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...
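
    The two-level structure amounts to two successive filters with very different time budgets; the toy sketch below (all accept fractions invented, chosen only to echo the quoted rates) shows how a cheap Level-1 decision shields the expensive HLT from the full input rate.

      # Toy two-stage trigger: a fast Level-1 filter followed by the HLT on the
      # survivors. Accept fractions are illustrative (~40 MHz -> ~100 kHz -> ~1 kHz).
      import random

      def level1_accept(event):
          return event["l1_score"] < 1 / 400      # cheap, coarse-granularity decision

      def hlt_accept(event):
          return event["hlt_score"] < 1 / 100     # refined software reconstruction

      n_in, l1_out, recorded = 1_000_000, 0, 0
      for _ in range(n_in):
          event = {"l1_score": random.random(), "hlt_score": random.random()}
          if level1_accept(event):
              l1_out += 1
              if hlt_accept(event):
                  recorded += 1
      print(f"in: {n_in}  after L1: {l1_out}  recorded: {recorded}")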

  2. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Joos, M; Schumacher, J; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS data-acquisition system. It receives and buffers event data accepted from all sub-detectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a GbE-based network. The ATLAS ROS will be completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3. The new ROS will consist of roughly 100 Linux-based 2U-high rack-mounted server PCs, each equipped with 2 PCIe I/O cards and four 10GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, called RobinNP. They will provide connectivity to about 2000 point-to-point optical links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and challenges in current COTS PC architectures with non-uniform memory and I/O access paths. In this paper the requirements...

  3. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Heller, C; The ATLAS collaboration

    2011-01-01

    ATLAS is one of the multipurpose experiments that record the products of the LHC proton-proton and heavy-ion collisions. In order to reconstruct the trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built using two different technologies, silicon planar sensors (pixels and microstrips) and drift-tube based detectors. Together they constitute the ATLAS Inner Detector, which is embedded in a 2 T axial field. Efficiently reconstructing the tracks of charged particles traversing the detector and precisely measuring their momenta are of crucial importance for physics analyses. In order to achieve its scientific goals, an alignment of the ATLAS Inner Detector is required to accurately determine its more than 700,000 degrees of freedom. The goal of the alignment is set such that the limited knowledge of the sensor locations should not deteriorate the resolution of track parameters by more than 20% with respect to the intrinsic tracker resolution. The implementation of t...
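
    At its core, track-based alignment is a least-squares problem: the unknown sensor displacements are those that minimize the track-hit residuals. A toy one-parameter version (numpy, with invented numbers) recovers a single module's offset as the mean residual.

      # Toy track-based alignment: a module shifted by an unknown offset d gives
      # hit residuals r_i = d + noise, and chi2 minimization yields d = mean(r_i).
      import numpy as np

      rng = np.random.default_rng(1)
      true_offset = 0.042                          # mm, the misalignment to recover
      sigma = 0.1                                  # mm, single-hit resolution
      residuals = true_offset + rng.normal(0.0, sigma, size=5000)

      # chi2(d) = sum((r_i - d)^2 / sigma^2)  ->  minimized at d = mean(r_i)
      d_hat = residuals.mean()
      d_err = sigma / np.sqrt(len(residuals))
      print(f"estimated offset {d_hat:.4f} +- {d_err:.4f} mm")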

  4. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    International Nuclear Information System (INIS)

    Fischler, M; Rechenmacher, R; Green, C; Kowalkowski, J; Norman, A; Paterno, M

    2012-01-01

    The NOvA experiment is a long-baseline neutrino experiment designed to make precision probes of the structure of neutrino mixing. The experiment features a unique deadtimeless data acquisition system that is capable of acquiring and building an event data stream from the continuous readout of the more than 360,000 far detector channels. In order to achieve its physics goals the experiment must be able to buffer, correlate and extract the data in this stream with the beam-spills that occur at Fermilab. In addition the NOvA experiment seeks to enhance its data collection efficiencies for rare classes of event topologies that are valuable for calibration through the use of data-driven triggering. The NOvA-DDT is a prototype Data-Driven Triggering system. NOvA-DDT has been developed using the Fermilab artdaq generic DAQ/Event-building toolkit. This toolkit provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module, unchanged, as a component of the online triggering decisions. We have measured the performance and overhead of the NOvA-DDT framework using a Hough transform based trigger decision module developed for the NOvA detector to identify cosmic rays. The results of these tests, which were run on the NOvA prototype near detector, yielded a mean processing time of 98 ms per event, while consuming only 1/16th of the available processing capacity. These results provide a proof of concept that a NOvA-DDT based processing system is a viable strategy for data acquisition and triggering for the NOvA far detector.

  5. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Karavakis, Edward; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Litmaath, Maarten; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm serves to allow high job reliability, and although it has clear advantages, such as making the working environment homogeneous, the approach presents security and traceability challenges. To address these challenges, gLExec can be used to let the payload of each user be executed under a different UNIX user id that uniquely identifies the ATLAS user. This paper describes the recent improvements and evolution of the security model within the ATLAS PanDA system, including improvements in the PanDA pilot and in the PanDA server and their integration with MyProxy, a credential caching system that entitles a person or a service to act in the name of the issuer of the credential. Finally, it presents results from ATLAS user jobs running with gLExec and describes the deployment campaign within ATLAS.
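
    The core mechanism, running each user's payload under its own UNIX uid, can be sketched with standard POSIX calls (an illustration of the idea, not gLExec itself; the uid mapping is invented and the snippet would have to start as root for the identity switch to succeed):

      # Sketch of per-user payload isolation: fork, drop to the uid mapped to
      # the submitting grid user, then exec the payload.
      import os

      USER_TO_UID = {"atlas_user_1": 20001, "atlas_user_2": 20002}   # hypothetical map

      def run_payload_as(grid_user, argv):
          pid = os.fork()
          if pid == 0:                       # child: drop privileges, then exec
              uid = USER_TO_UID[grid_user]
              os.setgid(uid)                 # drop group first, while still privileged
              os.setuid(uid)                 # irreversible switch to the mapped uid
              try:
                  os.execvp(argv[0], argv)
              finally:
                  os._exit(127)              # reached only if the exec itself fails
          _, status = os.waitpid(pid, 0)     # the pilot waits for the payload
          return os.WEXITSTATUS(status)

      # run_payload_as("atlas_user_1", ["id"])   # would report the mapped identity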

  6. Commissioning the ATLAS Level-1 Central Trigger System

    CERN Document Server

    Sherman, Daniel

    2010-01-01

    The ATLAS Level-1 central trigger is a critical part of ATLAS operation. It receives the 40 MHz bunch clock from the LHC and distributes it to all sub-detectors. It initiates their read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors and a variety of additional trigger inputs from detectors in the forward region. It also provides trigger summary information to the data acquisition system and the Level-2 trigger system. In this paper, we present the completion of the installed central trigger system, its performance during cosmic-ray data taking and the experience gained with triggering on the first LHC beams.
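
    Conceptually, the Level-1 Accept is a programmable combination of the calorimeter, muon and forward trigger inputs, gated by the dead-time veto; a sketch of that logic follows (the item menu and prescale values are invented):

      # Sketch of forming a Level-1 Accept: accept when any enabled trigger item
      # fires (after its prescale) and the read-out is not busy.
      TRIGGER_MENU = {"EM18": 1, "MU10": 1, "FWD_MBTS": 1000}   # item -> prescale
      counters = {item: 0 for item in TRIGGER_MENU}

      def level1_accept(fired_items, busy):
          if busy:                          # dead-time veto from the read-out system
              return False
          for item in fired_items:
              prescale = TRIGGER_MENU.get(item)
              if prescale is None:
                  continue
              counters[item] += 1
              if counters[item] % prescale == 0:
                  return True               # keep 1 in `prescale` fires of this item
          return False

      print(level1_accept({"MU10"}, busy=False))        # -> True
      print(level1_accept({"FWD_MBTS"}, busy=False))    # -> False until the 1000th fire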

  7. ATLAS Maintenance and Operation management system

    CERN Document Server

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the actions of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are understaffed or overstaffed, will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web-based interface that combines the flexibility and comfort of a desktop application, intuitive data visualization and navigation techniques, with a lightweight service-oriented architecture. We will review the application, its usage within the ATLAS experiment, and its underlying design and implementation.

  8. The ATLAS distributed analysis system

    OpenAIRE

    Legger, F.

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During...

  9. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Marjanovic, Marija; The ATLAS collaboration

    2018-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibers to photo-multiplier tubes (PMTs), located in the outer part of the calorimeter. The readout is segmented into about 5000 cells, each one being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of the full readout chain during data taking, a set of calibration sub-systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements, and an integrator based readout system. Combined information from all systems makes it possible to monitor and equalize the calorimeter response at each stage of the signal evolution, from scintillation light to digitization. Calibration runs are monitored from a data quality perspective and u...
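
    The combination of these calibration systems can be thought of as a chain of per-cell constants, each monitoring one stage of the readout; the sketch below follows that idea with invented factor names and numbers (not the actual TileCal calibration formula).

      # Illustrative per-cell calibration: correct the measured charge with
      # constants from the charge-injection, laser and Cesium systems.
      def calibrated_energy(adc_counts, c_adc_to_pc, c_laser, c_cesium, pc_per_gev=1.05):
          # c_adc_to_pc: electronics gain from the charge-injection system
          # c_laser:     corrects PMT gain drift since the last Cesium scan
          # c_cesium:    equalizes the cell response measured with the Cs source
          charge = adc_counts * c_adc_to_pc                 # ADC counts -> pC
          return charge * c_laser * c_cesium / pc_per_gev   # pC -> GeV

      # One cell read out by two PMTs in parallel; average the two estimates.
      e1 = calibrated_energy(820, c_adc_to_pc=0.0012, c_laser=1.02, c_cesium=0.99)
      e2 = calibrated_energy(805, c_adc_to_pc=0.0012, c_laser=0.98, c_cesium=1.01)
      print(f"cell energy ~ {(e1 + e2) / 2:.3f} GeV")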

  10. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Cortes-Gonzalez, Arely; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator based readout system. Combined information from all systems makes it possible to monitor and equalise the calorimeter r...

  11. Contributions to the integrated graphical user interface

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    The Online Software is part of the distributed Data Acquisition System (DAQ) for the ATLAS experiment that will start taking data in 2007 at the Large Hadron Collider at CERN. The Online Software system is responsible for overall experiment control, including run control, configuration and monitoring of the Trigger and Data Acquisition System (TDAQ) and management of data-taking partitions. The system encompasses all the software dealing with configuring, controlling and monitoring the data acquisition system but excludes anything dealing with the management, processing or transportation of physics data. In other words, the Online Software acts as the 'glue' for a number of heterogeneous sub-systems, providing not only a uniform control interface, but also the possibility of easily abstracting the specificities of those subsystems in order to provide them with control services. The component model architecture has been adopted for the system, each component being developed as an individual package. All the hardware and software configurations of the data-taking partitions are stored in configuration databases. The Process Manager component performs the basic job control of the software components. The Integrated Graphical User Interface (IGUI) is one of the integration components of the Online Software, allowing the operator to control and monitor the status of the current data-taking run in terms of its main parameters, detector configuration, trigger rate, buffer occupancy and state of the subsystems. The component has been designed as a Java application, with specialized panels that allow the user to send the main DAQ commands and to display messages, states or run-specific parameters of the whole system or of the other components (Run Control, Run Parameters, DAQ Supervisor, Process Manager, Message Reporting, Monitoring or Data Flow). The design of this component allows users to develop their own panels to be displayed

  12. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Karavakis, Edward; The ATLAS collaboration; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Litmaath, Maarten; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    The ATLAS Experiment at the Large Hadron Collider has collected data during Run 1 and is ready to collect data in Run 2. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. At any given time, there are more than 150,000 concurrent jobs running and about a million jobs are submitted on a daily basis on behalf of thousands of physicists within the ATLAS collaboration. The Production and Distributed Analysis (PanDA) workload management system has proved to be a key component of ATLAS and plays a crucial role in the success of the large-scale distributed computing as it is the sole system for distributed processing of Grid jobs across the collaboration since October 2007. ATLAS user jobs are executed on worker nodes by pilots sent to the sites by pilot factories. This pilot architecture has greatly improved job reliability and although it has clear advantages, such as making the working environment homogeneous by hiding any potential heterogeneities, the ...

  13. The Run-2 ATLAS Trigger System

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC (2009-2013) at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger systems, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. Using a few examples, we will show the ...

  14. Development of an X-ray imaging system with SOI pixel detectors

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Ryutaro, E-mail: ryunishi@post.kek.jp [School of High Energy Accelerator Science, SOKENDAI (The Graduate University for Advanced Studies), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Arai, Yasuo; Miyoshi, Toshinobu [Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK-IPNS), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Hirano, Keiichi; Kishimoto, Shunji; Hashimoto, Ryo [Institute of Materials Structure Science, High Energy Accelerator Research Organization (KEK-IMSS), Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-09-21

    An X-ray imaging system employing pixel sensors in silicon-on-insulator technology is currently under development. The system consists of an SOI pixel detector (INTPIX4) and a DAQ system based on a multi-purpose readout board (SEABAS2). To remove a bottleneck in the total DAQ throughput of the first prototype, parallel processing of the data-taking and storing processes and a FIFO buffer were implemented in the new DAQ release. Owing to these upgrades, the DAQ throughput was improved from 6 Hz (41 Mbps) to 90 Hz (613 Mbps). The first X-ray imaging system with the new DAQ software release was tested using 33.3 keV and 9.5 keV mono X-rays for three-dimensional computerized tomography. The results of these tests are presented. - Highlights: • An X-ray imaging system employing SOI pixel sensors is currently under development. • The DAQ of the first prototype had a bottleneck in its total throughput. • The new DAQ release removes the bottleneck through parallel processing and a FIFO buffer. • The new DAQ release was tested using 33.3 keV and 9.5 keV mono X-rays.
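
    The throughput fix described here, decoupling data taking from storing through a FIFO, is the classic producer-consumer pattern; a minimal sketch follows (buffer size, frame size and frame count are invented):

      # Decouple data taking from storing with a bounded FIFO so that slow disk
      # writes no longer block the readout loop.
      import queue, threading

      fifo = queue.Queue(maxsize=256)            # FIFO buffer between the two stages

      def take_data(n_frames):
          for i in range(n_frames):
              frame = bytes(64)                  # stand-in for one detector frame
              fifo.put((i, frame))               # blocks only when the buffer is full
          fifo.put(None)                         # end-of-run marker

      def store_data(path):
          with open(path, "wb") as f:
              while (item := fifo.get()) is not None:
                  _, frame = item
                  f.write(frame)                 # the slow stage runs in parallel now

      reader = threading.Thread(target=take_data, args=(1000,))
      writer = threading.Thread(target=store_data, args=("run.dat",))
      reader.start(); writer.start()
      reader.join(); writer.join()
      print("stored 1000 frames")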

  15. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) uses a trigger system that consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. The ATLAS trigger successfully collected collision data during the first run of the LHC (Run-1, 2009-2013) at centre-of-mass energies between 900 GeV and 8 TeV. In the second run of the LHC (Run-2), starting from 2015, the LHC operates at a centre-of-mass energy of 13 TeV and provides a higher luminosity of collisions. Also, the number of collisions occurring in the same bunch crossing increases. The ATLAS trigger system has to cope with these challenges, while maintaining or even improving the efficiency to select relevant physics processes. In this talk, first we will review the ATLAS trigger ...

  16. An Embedded Real-Time System on ATLAS ROBIN

    OpenAIRE

    Yu, Maoyuan

    2012-01-01

    ATLAS is the largest particle detector at the Large Hadron Collider for high-energy physics experiments and produces over 40 TB/s of event data. The ATLAS Readout Buffer INput (ROBIN) subsystem is an essential device to buffer and reduce the data; it has an IBM PowerPC core for the control functionalities. This dissertation addresses the software design of an embedded real-time system centred on the PowerPC micro-controller as the management core of the ROBIN. A page-based solution is pr...

  17. The ATLAS software installation system for LCG/EGEE

    Energy Technology Data Exchange (ETDEWEB)

    Salvo, A D [Istituto Nazionale di Fisica Nucleare, sez. Roma 1 (Italy); Barchiesi, A [Universita di Roma I 'La Sapienza' (Italy); Gnanvo, K [Queen Mary and Westfield College (United Kingdom); Gwilliam, C [University of Liverpool (United Kingdom); Kennedy, J; Krobath, G [Ludwig-Maximilians-Universitaet Muenchen (Germany); Olszewski, A [Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences (Poland); Rybkine, G [Royal Holloway College (United Kingdom)]

    2008-07-15

    The huge amount of resources available in the Grids, and the necessity to have the most up-to-date experimental software deployed in all the sites within a few hours, have driven the need for an automatic installation system for the LHC experiments. In this work we describe the ATLAS system for the experiment software installation in LCG/EGEE, based on the Light Job Submission Framework for Installation (LJSFi), an independent job submission framework for generic submission and job tracking in EGEE. LJSFi is able to automatically discover, check, install, test and tag the full set of resources made available in LCG/EGEE to the ATLAS Virtual Organization in a few hours, depending on the site availability.
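
    The discover/check/install/test/tag cycle is easy to picture as a per-site pipeline that records where each site stops (function and site names are invented, not the LJSFi API):

      # Sketch of an automatic installation cycle over Grid sites: each site is
      # checked, installed, tested and finally tagged; a failure stops the chain.
      STEPS = ("check", "install", "test", "tag")

      def run_step(site, step):
          # Placeholder for the real work (job submission, validation, ...).
          return not (site == "site-b.example" and step == "test")

      def install_everywhere(sites):
          status = {}
          for site in sites:                     # "discover" = the input site list
              for step in STEPS:
                  if not run_step(site, step):
                      status[site] = f"failed at {step}"
                      break                      # never tag a site that failed a step
              else:
                  status[site] = "tagged"
          return status

      print(install_everywhere(["site-a.example", "site-b.example"]))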

  18. The 2004 ATLAS Combined Test Beam

    CERN Multimedia

    The ATLAS CTB Team, .

    2004-01-01

    In 2004, ATLAS was involved in a large combined test beam (CTB) effort in H8. A complete slice of the barrel detector and of the muon end-cap was tested, with the following clear goals: to pre-commission the final elements and to study the detector performance in realistic combined data taking. Thanks to this experience, considerable operational expertise has been acquired and a large dataset (~4.6 TB, ~90 million events on Castor) has been collected and is already under analysis. The CTB was characterized by different phases with an incremental presence of sub-detector modules and associated DAQ infrastructure, as well as incremental improvement of the analysis tools for prompt data certification. The physics goals of the CTB were defined in consultation with the physics coordinator, all the sub-detector representatives and the combined performance group representative. With all these indications, a detailed day-by-day run plan schedule was defined before the CTB start and was foll...

  19. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  20. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  1. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    International Nuclear Information System (INIS)

    Campana, S

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R and D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  2. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration; Serfon, Cedric; Barisits, Martin-Stefan; Lassnig, Mario; Beermann, Thomas; Guan, Wen

    2017-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years, with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 250 petabytes spread over 130 storage sites and can handle file transfer rates of up to 30 Hz. In this paper, we discuss our experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service and the evolution of the system over its first years of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline some new developments, which mainly focus on performance and automation.

  3. Performance of a proximity cryogenic system for the ATLAS central solenoid magnet

    CERN Document Server

    Doi, Y; Makida, Y; Kondo, Y; Kawai, M; Aoki, K; Haruyama, T; Kondo, T; Mizumaki, S; Wachi, Y; Mine, S; Haug, F; Delruelle, N; Passardi, Giorgio; ten Kate, H H J

    2002-01-01

    The ATLAS central solenoid magnet has been designed and constructed as a collaborative work between KEK and CERN for the ATLAS experiment in the LHC project. The solenoid provides an axial magnetic field of 2 Tesla at the center of the tracking volume of the ATLAS detector. The solenoid is installed in a common cryostat of a liquid-argon calorimeter in order to minimize the mass of the cryostat wall. The coil is cooled indirectly by using two-phase helium flow in a pair of serpentine cooling lines. The cryogen is supplied by the ATLAS cryogenic plant, which also supplies helium to the Toroid magnet systems. The proximity cryogenic system for the solenoid has two major components: a control dewar and a valve unit. In addition, a programmable logic controller, PLC, was prepared for the automatic operation and solenoid test in Japan. This paper describes the design of the proximity cryogenic system and results of the performance test. (7 refs).

  4. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  5. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity of a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions thanks to twice the number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally - particularly useful for commissioning, calibration and test runs - it allows concurrent running of up to 3 different sub-detector combinations. In this contribution, we give an overview of the operational software framework of the L1CT system with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are m...

  6. Diagnostic Systems and Resources utilization of the ATLAS High Level Trigger

    CERN Document Server

    Sidoti, A; The ATLAS collaboration; Ospanov, R

    2010-01-01

    Since the LHC started colliding protons in December 2009, the ATLAS trigger has operated very successfully with a collision rate which has increased by several orders of magnitude. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance and assess the overall quality of the trigger selection during collisions running. ATLAS has broad physics goals which, given the complex event topologies involved, require a large number of different active triggers and correspondingly sophisticated software structures and concepts. The trigger of the ATLAS experiment is built as a three-level system. The first level is realized in hardware while the high level triggers (HLT) are software based and run on large PC farms. The trigger reduces the design bunch-crossing rate of 40 MHz to an average event rate of about 200 Hz for storage. Since the ATLAS detector is a general purpose detector, the trigger must be sensitive to a large numb...

  7. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  8. The detector control system of the ATLAS experiment

    International Nuclear Information System (INIS)

    Poy, A Barriuso; Burckhart, H J; Cook, J; Franz, S; Gutzwiller, O; Hallgren, B; Schlenker, S; Varela, F; Boterenbrood, H; Filimonov, V; Khomutnikov, V

    2008-01-01

    The ATLAS experiment is one of the experiments at the Large Hadron Collider, constructed to study elementary particle interactions in collisions of high-energy proton beams. The individual detector components as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision using operator commands, reads, processes and archives the operational parameters of the detector, allows for error recognition and handling, manages the communication with external control systems, and provides a synchronization mechanism with the physics data acquisition system. Given the enormous size and complexity of ATLAS, special emphasis was put on the use of standardized hardware and software components enabling efficient development and long-term maintainability of the DCS over the lifetime of the experiment. Currently, the DCS is being used successfully during the experiment commissioning phase

  9. Yarr: A PCIe based readout system for semiconductor tracking systems

    Energy Technology Data Exchange (ETDEWEB)

    Heim, Timon [Bergische Universitaet Wuppertal, Wuppertal (Germany); CERN, Geneva (Switzerland); Maettig, Peter [Bergische Universitaet Wuppertal, Wuppertal (Germany); Pernegger, Heinz [CERN, Geneva (Switzerland)

    2015-07-01

    The Yarr readout system is a novel DAQ concept, using an FPGA board connected via PCIe to a computer, to read out semiconductor tracking systems. The system uses the FPGA as a reconfigurable IO interface which, in conjunction with the very high speed of the PCIe bus, enables a focus on processing the data stream coming from the pixel detector in software. Modern computer systems could potentially make custom signal-processing hardware in readout systems obsolete, and the Yarr readout system showcases this for FE-I4 chips, which are state-of-the-art readout chips used in the ATLAS Pixel Insertable B-Layer and developed for tracking in high multiplicity environments. The underlying concept of the Yarr readout system is to move intelligence from hardware into software without loss of performance, which is made possible by modern multi-core processors. The FPGA board firmware acts like a buffer and does no further processing of the data stream, enabling rapid integration of new hardware thanks to the minimal firmware.
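
    The "intelligence in software" idea above amounts to letting host cores decode the buffered stream. Below is a toy sketch of such software-side decoding with a hypothetical 32-bit word layout; this is not the actual FE-I4 data format.

        # Toy software decoder for a buffered FPGA word stream (hypothetical format).
        from concurrent.futures import ProcessPoolExecutor

        def decode(word: int):
            # Hypothetical packing: bits [31:24] channel id, bits [23:0] payload.
            return (word >> 24) & 0xFF, word & 0xFFFFFF

        if __name__ == "__main__":
            stream = [0x01000123, 0x02000456]      # words as read from the PCIe buffer
            with ProcessPoolExecutor() as pool:    # spread decoding over host cores
                for channel, payload in pool.map(decode, stream):
                    print(channel, payload)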

  10. Soft real-time alarm messages for ATLAS TDAQ

    Science.gov (United States)

    Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.

    2010-05-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG—Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring "interesting" parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.

  11. Soft real-time alarm messages for ATLAS TDAQ

    International Nuclear Information System (INIS)

    Darlea, G.; Al Shabibi, A.; Martin, B.; Lehmann Miotto, G.

    2010-01-01

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG-Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring 'interesting' parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.

  12. Soft real-time alarm messages for ATLAS TDAQ

    Energy Technology Data Exchange (ETDEWEB)

    Darlea, G., E-mail: georgiana.lavinia.darlea@cern.c [CERN, Geneva (Switzerland); Al Shabibi, A.; Martin, B.; Lehmann Miotto, G. [CERN, Geneva (Switzerland)

    2010-05-21

    The ATLAS TDAQ network consists of three separate Ethernet-based networks (Data, Control and Management) with over 2000 end-nodes. The TDAQ system has to be aware of the meaningful network failures and events in order for it to take effective recovery actions. The first stage of the process is implemented with Spectrum, a commercial network management tool. Spectrum detects and registers all network events, then it publishes the information via a CORBA programming interface. A gateway program (called NSG-Network Service Gateway) connects to Spectrum through CORBA and exposes to its clients a Java RMI interface. This interface implements a callback mechanism that allows the clients to subscribe for monitoring 'interesting' parts of the network. The last stage of the TDAQ network monitoring tool is implemented in a module named DNC (DAQ to Network Connection), which filters the events that are to be reported to the TDAQ system: it subscribes to the gateway only for the machines that are currently active in the system and it forwards only the alarms that are considered important for the current TDAQ data taking session. The network information is then synthesized and presented in a human-readable format. These messages can be further processed either by the shifter who is in charge, the network expert or the Online Expert System. This article aims to describe the different mechanisms of the chain that transports the network events to the front-end user, as well as the constraints and rules that govern the filtering and the final format of the alarm messages.
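
    The three records above describe the same subscribe-and-filter chain (Spectrum -> NSG -> DNC). The sketch below illustrates that callback pattern in Python; the real gateway uses CORBA and Java RMI, and all class, host and message names here are hypothetical.

        # Sketch of the subscribe-and-filter pattern described above.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class NetworkEvent:
            host: str
            severity: str   # e.g. "CRITICAL", "MINOR"
            text: str

        class Gateway:
            """Stands in for the NSG: clients register callbacks for events."""
            def __init__(self):
                self._subscribers: List[Callable[[NetworkEvent], None]] = []
            def subscribe(self, callback):
                self._subscribers.append(callback)
            def publish(self, event):
                for cb in self._subscribers:
                    cb(event)

        class DNC:
            """Forwards only alarms for hosts active in the current data taking."""
            def __init__(self, active_hosts):
                self.active_hosts = set(active_hosts)
            def on_event(self, event):
                if event.host in self.active_hosts and event.severity == "CRITICAL":
                    print(f"[ALARM] {event.host}: {event.text}")  # human-readable message

        gw = Gateway()
        gw.subscribe(DNC(active_hosts={"pc-tdq-001"}).on_event)
        gw.publish(NetworkEvent("pc-tdq-001", "CRITICAL", "link down"))   # forwarded
        gw.publish(NetworkEvent("pc-tdq-099", "CRITICAL", "link down"))   # filtered out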

  13. The consistency service of the ATLAS Distributed Data Management system

    CERN Document Server

    Serfon, C; The ATLAS collaboration

    2011-01-01

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data losses, due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.

  14. The Consistency Service of the ATLAS Distributed Data Management system

    CERN Document Server

    Serfon, C; The ATLAS collaboration

    2010-01-01

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data losses, due to software and hardware failures, is increasing. In order to ensure the consistency of all data produced by ATLAS a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
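
    A minimal sketch of the consistency-service logic described in the two records above: a reported bad replica is re-copied from a healthy one, and users are notified only when no replica survives. Function and field names are illustrative, not the DQ2 API.

        # Illustrative consistency-service decision logic (hypothetical names).
        def handle_report(lfn, bad_site, replicas, notify, copy):
            """replicas: {site: checksum}; copy/notify are injected I/O callbacks."""
            healthy = [s for s in replicas if s != bad_site]
            if healthy:
                copy(src=healthy[0], dst=bad_site, lfn=lfn)   # recover from a good replica
                return "recovered"
            notify(lfn, reason="irrecoverable loss")          # no replica left anywhere
            return "lost"

        status = handle_report(
            lfn="data.0001.pool.root",
            bad_site="SITE_A",
            replicas={"SITE_A": "corrupt", "SITE_B": "ad:9f1c"},
            notify=lambda lfn, reason: print(f"notify users: {lfn} {reason}"),
            copy=lambda src, dst, lfn: print(f"re-copy {lfn}: {src} -> {dst}"),
        )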

  15. Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; García Navarro, José Enrique; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Vaniachine, Alexandre

    2015-01-01

    The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission grows exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of many-task workflows. These workflows are being implemented in the Database Engine for Tasks (DEfT) that generates individual tasks for processing ...

  16. FAIR DAQ system: Performances and global DAQ management

    International Nuclear Information System (INIS)

    Ordine, A.; Boiano, A.; Zaghi, A.

    1997-01-01

    We present an overview of the features of FAIR (FAst Inter-crate Readout), a novel "plug-n-play" trigger- and readout-oriented bus system. It provides an effective, low-cost, homogeneous, highly extendible and scalable front-end environment. Readout and event building are performed at the same time, without the need for CPUs, by means of a transparent hardware-level protocol. The measured rate of data transfer and event building can be as fast as 22 ns/longword (1.44 Gbit/s). The measured performances will be discussed. The "plug-n-play" feature will also be presented in some detail, along with the control system based on a network embedded in the bus.
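
    The quoted transfer figure can be checked directly, assuming a 32-bit longword:

        # 32 bits every 22 ns -> aggregate event-building bandwidth.
        bits_per_longword = 32
        seconds_per_longword = 22e-9
        print(f"{bits_per_longword / seconds_per_longword / 1e9:.2f} Gbit/s")
        # -> 1.45 Gbit/s, consistent with the quoted ~1.44 Gbit/s up to rounding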

  17. Argonne Tandem Linac Accelerator System (ATLAS)

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a national user facility at Argonne National Laboratory in Argonne, Illinois. The ATLAS facility is a leading facility for nuclear structure research in the...

  18. A High-Resolution In Vivo Atlas of the Human Brain's Serotonin System.

    Science.gov (United States)

    Beliveau, Vincent; Ganz, Melanie; Feng, Ling; Ozenne, Brice; Højgaard, Liselotte; Fisher, Patrick M; Svarer, Claus; Greve, Douglas N; Knudsen, Gitte M

    2017-01-04

    The serotonin (5-hydroxytryptamine, 5-HT) system modulates many important brain functions and is critically involved in many neuropsychiatric disorders. Here, we present a high-resolution, multidimensional, in vivo atlas of four of the human brain's 5-HT receptors (5-HT1A, 5-HT1B, 5-HT2A, and 5-HT4) and the 5-HT transporter (5-HTT). The atlas is created from molecular and structural high-resolution neuroimaging data consisting of positron emission tomography (PET) and magnetic resonance imaging (MRI) scans acquired in a total of 210 healthy individuals. Comparison of the regional PET binding measures with postmortem human brain autoradiography outcomes showed a high correlation for the five 5-HT targets and this enabled us to transform the atlas to represent protein densities (in picomoles per milliliter). We also assessed the regional association between protein concentration and mRNA expression in the human brain by comparing the 5-HT density across the atlas with data from the Allen Human Brain atlas and identified receptor- and transporter-specific associations that show the regional relation between the two measures. Together, these data provide unparalleled insight into the serotonin system of the human brain. We present a high-resolution positron emission tomography (PET)- and magnetic resonance imaging-based human brain atlas of important serotonin receptors and the transporter. The regional PET-derived binding measures correlate strongly with the corresponding autoradiography protein levels. The strong correlation enables the transformation of the PET-derived human brain atlas into a protein density map of the serotonin (5-hydroxytryptamine, 5-HT) system. Next, we compared the regional receptor/transporter protein densities with mRNA levels and uncovered unique associations between protein expression and density at high detail. This new in vivo neuroimaging atlas of the 5-HT system not only provides insight in the human brain's regional protein

  19. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and for the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger-object correlations) at the three trigger components L1Topo, CT, and HLT; on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data-taking conditions, such as falling luminosity or rate spikes in the detector readout ...
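
    The on-the-fly prescaling mentioned above can be pictured with a simple counter-based prescaler: the factor is chosen from the current raw rate and the target rate, and every N-th trigger is kept. This is only an illustration; the actual ATLAS prescale mechanism is more elaborate.

        # Rate-dependent prescaling, illustrated with a counter-based prescaler.
        def compute_prescale(raw_rate_hz, target_rate_hz):
            return max(1, round(raw_rate_hz / target_rate_hz))

        class Prescaler:
            def __init__(self, prescale):
                self.prescale = prescale
                self.counter = 0
            def accept(self):
                self.counter += 1
                return self.counter % self.prescale == 0   # keep every N-th trigger

        ps = Prescaler(compute_prescale(raw_rate_hz=50_000, target_rate_hz=100))
        kept = sum(ps.accept() for _ in range(50_000))
        print(kept)   # -> 100 accepts out of one second of raw triggers at 50 kHz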

  20. Module and electronics developments for the ATLAS ITK pixel system

    CERN Document Server

    Munoz Sanchez, Francisca Javiela; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment is preparing for an extensive modification of its detectors in the course of the planned HL-LHC accelerator upgrade around 2025. The ATLAS upgrade includes the replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). The five innermost layers of ITk will be a pixel detector built of new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m², depending on the final layout choice, which is expected to take place in 2017. In this paper an overview of the ongoing R&D activities on modules and electronics for the ATLAS ITk is given including the main developments and achievements in silicon planar and 3D sensor technologies, readout and power challenges.

  1. A High-Resolution In Vivo Atlas of the Human Brain's Serotonin System

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Ganz-Benjaminsen, Melanie; Feng, Ling

    2017-01-01

    The serotonin (5-hydroxytryptamine, 5-HT) system modulates many important brain functions and is critically involved in many neuropsychiatric disorders. Here, we present a high-resolution, multidimensional, in vivo atlas of four of the human brain's 5-HT receptors (5-HT1A, 5-HT1B, 5-HT2A, and 5-HT4...... with postmortem human brain autoradiography outcomes showed a high correlation for the five 5-HT targets and this enabled us to transform the atlas to represent protein densities (in picomoles per milliliter). We also assessed the regional association between protein concentration and mRNA expression in the human...... brain by comparing the 5-HT density across the atlas with data from the Allen Human Brain atlas and identified receptor- and transporter-specific associations that show the regional relation between the two measures. Together, these data provide unparalleled insight into the serotonin system...

  2. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System

    CERN Document Server

    Pérez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum center-of-mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV. Still within the year, a collision rate of nearly 10 MHz is expected. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices.

    Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recor...

  3. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) utilizes a trigger system consisting of a hardware Level-1 trigger (L1) and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. In LHC Run-2, which started in 2015, the LHC operates at a centre-of-mass energy of 13 TeV, providing a luminosity of up to 1.2·10^34 cm^-2 s^-1. The ATLAS trigger system has to cope with these challenges while maintaining or even improving the efficiency to select relevant physics processes. In this paper, the ATLAS trigger system for LHC Run-2 is reviewed, together with the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Electron, muon and photon triggers covering trans...

  4. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the increasingly requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-time event and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. One other recognized byproduct is the increasing cross-talk between the systems, a very important ingredient for all the systems to profit from the large collective knowledge we dispose of in ATLAS. In the last two months, the first two PARs were organized for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  5. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on detector and are only transferred off detector once the first level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...

  6. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2016-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser and charge injection elements and it allows to monitor and equalize the calorimeter response at each stage of the signal production, from scin...

  7. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, ...

  8. In-beam experience with a highly granular DAQ and control network: TrbNet

    International Nuclear Information System (INIS)

    Michel, J; Korcyl, G; Maier, L; Traxler, M

    2013-01-01

    Virtually all Data Acquisition Systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and for more complex tasks such as pattern recognition and data reduction. All these FPGAs in a large system have to share a common state, like a trigger number or an epoch counter, to keep the system synchronized for consistent event/epoch building. Additionally, the collected data has to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance against intermittent data errors, in the form of automatic retransmission of faulty data. As FPGAs suffer from Single Event Effects when exposed to ionizing particles, the system also has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: the trigger/epoch information is transported with the highest priority; the data channel is second in the priority order, while the control channel is the last. Combined with a small frame size of 80 bits, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 µs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and has been successfully used during the Au+Au campaign in April 2012. With 2·10^6 Au ions per second and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented retransmission of packets and auto-shut-down of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and was successfully deployed on various Lattice and Xilinx devices. The TrbNet is also

  9. In-beam experience with a highly granular DAQ and control network: TrbNet

    Science.gov (United States)

    Michel, J.; Korcyl, G.; Maier, L.; Traxler, M.

    2013-02-01

    Virtually all Data Acquisition Systems (DAQ) for nuclear and particle physics experiments use a large number of Field Programmable Gate Arrays (FPGAs) for data transport and for more complex tasks such as pattern recognition and data reduction. All these FPGAs in a large system have to share a common state, like a trigger number or an epoch counter, to keep the system synchronized for consistent event/epoch building. Additionally, the collected data has to be transported with high bandwidth, optionally via the ubiquitous Ethernet protocol. Furthermore, the FPGAs' internal states and configuration memories have to be accessed for control and monitoring purposes. Another requirement for a modern DAQ network is fault tolerance against intermittent data errors, in the form of automatic retransmission of faulty data. As FPGAs suffer from Single Event Effects when exposed to ionizing particles, the system also has to deal with failing FPGAs. The TrbNet protocol was developed taking all these requirements into account. Three virtual channels are merged on one physical medium: the trigger/epoch information is transported with the highest priority; the data channel is second in the priority order, while the control channel is the last. Combined with a small frame size of 80 bits, this guarantees low-latency data transport: a system with 100 front-ends can be built with a one-way latency of 2.2 µs. The TrbNet protocol was implemented in each of the 550 FPGAs of the HADES upgrade project and has been successfully used during the Au+Au campaign in April 2012. With 2·10^6 Au ions per second and a 3% interaction ratio, the accepted trigger rate is 10 kHz while data is written to storage at 150 MBytes/s. Errors are reliably mitigated via the implemented retransmission of packets and auto-shut-down of individual links. TrbNet was also used for full monitoring of the FEE status. The network stack is written in VHDL and was successfully deployed on various Lattice and Xilinx devices. The TrbNet is also
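
    The fixed-priority merging of the three virtual channels described above can be sketched as follows: per 80-bit frame slot, the highest-priority non-empty channel is served (trigger, then data, then slow control). Channel contents here are placeholders.

        # Fixed-priority merge of three virtual channels onto one physical link.
        from collections import deque

        channels = {  # priority order as in the TrbNet description
            "trigger": deque(["T1"]),
            "data": deque(["D1", "D2"]),
            "control": deque(["C1"]),
        }

        def next_frame():
            for name in ("trigger", "data", "control"):
                if channels[name]:
                    return name, channels[name].popleft()   # one 80-bit frame per slot
            return None

        while (frame := next_frame()) is not None:
            print(frame)   # ('trigger','T1'), ('data','D1'), ('data','D2'), ('control','C1')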

  10. ZEXP - expert system for ZEUS

    International Nuclear Information System (INIS)

    Behrens, U.; Flasinski, M.; Hagge, L.

    1992-10-01

    Proper and timely reactions to errors occurring in the online data-acquisition (DAQ) system are necessary conditions for smooth data taking during experiment runs. Since the Eventbuilder (EVB) is a central part of the ZEUS DAQ system, it is the best place for monitoring, detecting, and recognizing erroneous behaviour. ZEXP is a software tool for improving the DAQ system's performance. The pattern recognition methodology used for designing one of its two main modules is discussed. The general design ideas of the system and some preliminary results from the summarizing run module are presented as well. (orig.)

  11. EPICS based DAQ system

    International Nuclear Information System (INIS)

    Cheng Weixing; Chen Yongzhong; Zhou Weimin; Ye Kairong; Liu Dekang

    2002-01-01

    EPICS is the most popular development platform for building control systems and beam diagnostic systems in modern physics experiment facilities. An EPICS-based data acquisition system was built under the Red Hat 6.2 operating system. The system has been successfully used in beam position monitor mapping, significantly improving the mapping process.
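
    A minimal sketch of what such an EPICS-based acquisition step looks like from the client side, using the pyepics Channel Access bindings; the process-variable names are hypothetical.

        # Read and write EPICS process variables via Channel Access (pyepics).
        from epics import caget, caput

        x = caget("BPM:01:X")      # read a (hypothetical) beam-position PV
        caput("MAP:STEP", 1)       # advance a (hypothetical) mapping-sequence PV
        print(f"BPM x = {x}")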

  12. The ATLAS Level-1 Trigger System with 13TeV nominal LHC collisions

    CERN Document Server

    Helary, Louis; The ATLAS collaboration

    2017-01-01

    The Level-1 (L1) Trigger system of the ATLAS experiment at CERN's Large Hadron Collider (LHC) plays a key role in the ATLAS detector data-taking. It is a hardware system that selects in real time events containing physics-motivated signatures. Selection is purely based on calorimeter energy depositions and hits in the muon chambers consistent with muon candidates. The L1 Trigger system has been upgraded to cope with the more challenging run-II LHC beam conditions, including increased centre-of-mass energy, increased instantaneous luminosity and higher levels of pileup. This talk summarises the improvements, commissioning and performance of the L1 ATLAS Trigger for the LHC run-II data period. The acceptance of muon triggers has been improved by increasing the hermeticity of the muon spectrometer. New strategies to obtain a better muon trigger signal purity were designed for certain geometrically difficult transition regions by using the ATLAS hadronic calorimeter. Algorithms to reduce noise spikes in muon trig...

  13. Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network

    CERN Multimedia

    Schwemmer, R; Neufeld, N; Svantesson, D

    2011-01-01

    The LHCb readout uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GBE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs, which first assemble the fragments from the L1 boards and then perform a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain t...

  14. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review will be given of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)

  15. The BELLE DAQ system

    Science.gov (United States)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-10-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. This system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. This system has been in operation for physics data taking since June 1999 without serious problems.

  16. The BELLE DAQ system

    International Nuclear Information System (INIS)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-01-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. This system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. This system has been in operation for physics data taking since June 1999 without serious problems.
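
    The design figures quoted in the two records above imply only a modest aggregate bandwidth into event building:

        # 500 Hz trigger rate at a 30 kB typical event size.
        trigger_rate_hz = 500
        event_size_bytes = 30_000
        print(trigger_rate_hz * event_size_bytes / 1e6, "MB/s")   # -> 15.0 MB/s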

  17. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  18. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  19. A high dynamic range data acquisition system for a solid-state electron electric dipole moment experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Kunkler, Brandon; Liu, Chen-Yu; Visser, Gerard [CEEM, Physics Department, Indiana University, Bloomington, Indiana 47408 (United States)

    2012-01-15

    We have built a high precision (24-bit) data acquisition (DAQ) system capable of simultaneously sampling eight input channels for the measurement of the electric dipole moment of the electron. The DAQ system consists of two main components: a master board for DAQ control and eight individual analog-to-digital converter (ADC) boards for signal processing. This custom DAQ system provides galvanic isolation of the ADC boards from each other and the master board using fiber optic communication to reduce the possibility of ground loop pickup and attain ultimate low levels of channel cross-talk. In this paper, we describe the implementation of the DAQ system and scrutinize its performance.
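
    For orientation, the ideal dynamic range implied by a 24-bit converter (quantization only, ignoring real-world noise) is:

        # Ideal dynamic range of an N-bit ADC: 20*log10(2^N).
        import math

        bits = 24
        print(f"{20 * math.log10(2 ** bits):.1f} dB")   # -> 144.5 dB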

  20. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  1. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...
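
    The destination-by-configuration behaviour described in the two records above is analogous to handler selection in Python's logging module. The sketch below is only that analogy, not the actual ERS API; the environment switch and the message are hypothetical.

        # Destination of reported messages chosen by the run-time environment.
        import logging, os

        log = logging.getLogger("ers.demo")
        if os.environ.get("TDAQ_PARTITION"):          # hypothetical switch
            handler = logging.StreamHandler()         # stand-in for middleware transport
        else:
            handler = logging.FileHandler("ers.log")  # local-file fallback
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        log.addHandler(handler)

        log.error("ROS readout timeout on ROS-EB-01")  # hypothetical message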

  2. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
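
    The "lightweight MPI wrapper" idea described above can be sketched with mpi4py: one rank per worker node, each launching an ordinary single-node payload, with exit codes gathered at rank 0. The payload script and the launch command are hypothetical.

        # Run e.g. (hypothetical): aprun -n 100 python wrapper.py
        import subprocess
        from mpi4py import MPI

        rank = MPI.COMM_WORLD.Get_rank()
        # Each rank runs an independent single-node job.
        result = subprocess.run(
            ["./run_payload.sh", f"--job-index={rank}"],   # hypothetical payload script
            capture_output=True, text=True)
        MPI.COMM_WORLD.gather(result.returncode, root=0)   # collect exit codes at rank 0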

  3. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots, over 100 PB of storage space on disk or tape. Monitoring of status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  4. Study of an on-line filtering system for the ATLAS detector

    International Nuclear Information System (INIS)

    Fede, E.

    2001-01-01

    The first chapter presents today's knowledge of particle physics, and a description of the main decay channels and physical signatures associated with the Higgs boson is given. The second chapter is dedicated to the LHC accelerator, with a focus on the ATLAS detector and its sub-detectors. The third chapter presents the ATLAS trigger system and its data acquisition system. In the fourth chapter the functionalities required of an adequate event filtering system, concerning both physics issues and data management, are described. The design of a prototype based on a farm of PCs linked through an Ethernet network is presented in the fifth chapter.

  5. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    International Nuclear Information System (INIS)

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. Such a system, which partially consists of various ATLAS-specific services, strongly relies on the WLCG infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution will describe the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the computing model, for several days in a row, concurrently with other LHC experiment activities. This dedicated effort led to consequent improvements of the ATLAS and WLCG services and to daily operation activities throughout the last year. The system has been delivering to WLCG tiers many hundreds of terabytes of simulated data and, since the summer of 2008, more than two petabytes of cosmic and beam data.

  6. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system that were in good shape and spotting areas that required improvement. Improvements included hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, tuning of FTS channels between CERN and the Tier-1s, and studies of data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs, with emphasis on data management and job execution in the ATLAS production system.

  7. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik; Cho, Seok; Choi, Ki Yong

    2009-04-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to be of a circulating loop-type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behaviour in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instrumentation in detail.
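
    A quick consistency check of the quoted scale factors: with the length scale of 1/2 and area scale of 1/144 given for the facility (see also the electrical power supply record below), the volume scale follows as

        \[ \frac{V_m}{V_p} \;=\; \frac{l_m}{l_p}\,\frac{A_m}{A_p} \;=\; \frac{1}{2}\times\frac{1}{144} \;=\; \frac{1}{288}, \]

    in agreement with the 1/288-volume figure quoted above.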

  8. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high-level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application, which is also responsible for managing database queries. However, this approach lacks user-interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...
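
    A toy illustration, with an invented table layout and SQLite standing in for Oracle, of the "HTML generated on the fly" pattern the abstract describes; the tight coupling of database queries and presentation in one place is exactly what made the original monitor hard to evolve.

        # Sketch only: one function both queries the job table and renders HTML.
        import sqlite3  # stand-in for Oracle in this illustration

        def job_summary_page(conn: sqlite3.Connection, limit: int = 10) -> str:
            rows = conn.execute(
                "SELECT jobid, status FROM jobs ORDER BY jobid DESC LIMIT ?", (limit,)
            ).fetchall()
            body = "".join(f"<tr><td>{j}</td><td>{s}</td></tr>" for j, s in rows)
            return f"<table><tr><th>Job</th><th>Status</th></tr>{body}</table>"

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE jobs (jobid INTEGER, status TEXT)")
        conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                         [(1, "finished"), (2, "running")])
        print(job_summary_page(conn))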

  9. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2010-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high-level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application, which is also responsible for managing database queries. However, this approach lacks user-interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  10. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Maeda, Junpei; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high-level trigger that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the data-taking period of Run-2 the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. In these proceedings, we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the Level-1 calorimeter and muon trigger system, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level higher-level trigger system into a single even...
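
    A quick arithmetic reading of the quoted rates (taking "a few hundred Hz" as roughly 400 Hz): the trigger levels together must provide an overall rejection of

        \[ R \;=\; \frac{40\ \mathrm{MHz}}{\sim 400\ \mathrm{Hz}} \;\approx\; 10^{5}, \]

    i.e. only about one bunch crossing in a hundred thousand is recorded.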

  11. The ATLAS Trigger System : Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware-based Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the course of the ongoing Run-2 data-taking campaign at 13 TeV centre-of-mass energy the trigger rates will be approximately 5 times higher than in Run-1. In these proceedings we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger subsystem and the merging of the previously two-level HLT system into a single ev...

  12. ATLAS Tile calorimeter calibration and monitoring systems

    Science.gov (United States)

    Chomont, Arthur; ATLAS Collaboration

    2017-11-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.
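
    A hypothetical sketch of how the per-channel calibration stages named above could compose multiplicatively (constants and names are invented; the real TileCal conditions treatment is more involved):

        # Illustration only: raw ADC counts corrected stage by stage.
        def calibrated_response(adc_counts: float,
                                cis_gain: float,         # charge injection: ADC -> pC
                                laser_correction: float, # PMT gain drift from laser runs
                                cesium_constant: float   # cell equalization from Cs scans
                                ) -> float:
            """Apply each calibration stage of the readout chain in turn."""
            return adc_counts * cis_gain * laser_correction * cesium_constant

        print(calibrated_response(850.0, 0.82, 1.013, 0.97))  # an equalized response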

  13. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P S; Nicolas, L

    2011-01-01

    High radiation doses, which will accumulate in components of the ATLAS experiment during data taking, will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2 and the fluences of 1-MeV(Si)-equivalent neutrons and thermal neutrons at several locations in the ATLAS detector. In this paper, measurements collected during two years of ATLAS data taking are presented and compared to predictions from radiation background simulations.

  14. Integration of Globus Online with the ATLAS PanDA Workload Management System

    CERN Document Server

    Contreras, C; The ATLAS collaboration; Maeno, T; Nilsson, P; Potekhin, M

    2012-01-01

    The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams at ATLAS Tier-3 sites and non-ATLAS Virtual Organizations, the overhead of installing and operating these components makes their use not very cost-effective. Globus Online is an emerging new tool from the Globus Alliance which has already proved popular within the research community. It provides users with fast and robust file transfer capabilities that can also be managed from a Web interface and, in addition to grid sites, allows individual workstations and laptops to serve as data transmission endpoints. We describe the integration of Globus Online functionality into the PanDA suite of software, in order to give more flexibility in choosing the method of data transfer to ATLAS Tier-3 and OSG users.

  15. Integration of Globus Online with the ATLAS PanDA Workload Management System

    International Nuclear Information System (INIS)

    Contreras, C; Deng, W; Maeno, T; Potekhin, M; Nilsson, P

    2012-01-01

    The PanDA Workload Management System is the basis for distributed production and analysis for the ATLAS experiment at the LHC. In this role, it relies on sophisticated dynamic data movement facilities developed in ATLAS. In certain scenarios, such as small research teams at ATLAS Tier-3 sites and non-ATLAS Virtual Organizations, the overhead of installing and operating these components makes their use not very cost-effective. Globus Online is an emerging new tool from the Globus Alliance which has already proved popular within the research community. It provides users with fast and robust file transfer capabilities that can also be managed from a Web interface and, in addition to grid sites, allows individual workstations and laptops to serve as data transmission endpoints. We describe the integration of Globus Online functionality into the PanDA suite of software, in order to give more flexibility in choosing the method of data transfer to ATLAS Tier-3 and Open Science Grid (OSG) users.

  16. DEAP-3600 Data Acquisition System

    Science.gov (United States)

    Lindner, Thomas

    2015-12-01

    DEAP-3600 is a dark matter experiment using liquid argon to detect Weakly Interacting Massive Particles (WIMPs). The DEAP-3600 Data Acquisition (DAQ) has been built using a combination of commercial and custom electronics, organized using the MIDAS framework. The DAQ system needs to suppress a high rate of background events from 39Ar beta decays. This suppression is implemented using a combination of online firmware and software-based event filtering. We will report on progress commissioning the DAQ system, as well as the development of the web-based user interface.
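
    An illustrative two-stage filter in the spirit of the scheme above: a cheap firmware-like threshold followed by a software decision based on the prompt fraction of the scintillation light (the thresholds and event model are invented for this sketch).

        # Sketch of combined firmware + software event filtering against 39Ar decays.
        def firmware_accept(pulse_peak: float, threshold: float = 10.0) -> bool:
            return pulse_peak > threshold  # cheap online cut in the front-end

        def software_accept(total_q: float, prompt_q: float,
                            min_prompt_fraction: float = 0.5) -> bool:
            # Nuclear-recoil-like events in argon have a larger prompt fraction
            # than 39Ar beta decays, so slow events are filtered out.
            return total_q > 0 and prompt_q / total_q >= min_prompt_fraction

        events = [{"peak": 25.0, "q": 100.0, "qp": 70.0},   # WIMP-like: kept
                  {"peak": 30.0, "q": 120.0, "qp": 30.0}]   # 39Ar-like: dropped
        kept = [e for e in events
                if firmware_accept(e["peak"]) and software_accept(e["q"], e["qp"])]
        print(len(kept))  # 1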

  17. ATLAS silicon microstrip detector system (SCT)

    International Nuclear Information System (INIS)

    Unno, Y.

    2003-01-01

    The SCT, together with the pixel and transition radiation tracker systems and a central solenoid, forms the central tracking system of the ATLAS detector at the LHC. Series production of SCT silicon microstrip sensors is near completion. The sensors have been shown to be robust against high-voltage operation up to the 500 V required after fluences of 3x10^14 protons/cm^2. SCT barrel modules are in series production. A low-noise CCD camera has been used to debug the onset of leakage currents.

  18. Report on container technology for the ATLAS TDAQ system

    CERN Document Server

    Gadirov, Hamid

    2016-01-01

    My summer student project, "Container technology for the Upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system", focused on researching container-based (operating-system-level) virtualization for TDAQ software. Several tests were performed on the Docker platform, and all of them showed that the TDAQ software is compatible with it.

  19. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the LHC Run-2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  20. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2018-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger, the improvements undertaken resulted in more pile-up-robust selection efficiencies and event ra...

  1. System Description of the Electrical Power Supply System for the ATLAS Integral Test Loop

    International Nuclear Information System (INIS)

    Moon, S. K.; Park, J. K.; Kim, Y. S.; Song, C. H.; Baek, W. P.

    2007-02-01

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), was constructed by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, the APR1400. This report describes the design and technical specifications of the electrical power supply system, which supplies electrical power to the core heater rods, other heaters, various pumps and other systems. The electrical power supply system received final operating approval from the Korea Electrical Safety Corporation. During performance tests of its operation and control, the electrical power supply system showed completely acceptable operation and control performance.

  2. Role Based Access Control system in the ATLAS experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F; Avolio, G

    2011-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen from the design of the infrastructure onwards and is now central to the operations model. In order to cope with the ever-growing need to restrict access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design and implementation, and of the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...
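
    A minimal sketch of the role-to-permission mapping at the heart of any RBAC scheme like the one described above; the role, user and action names are invented, not the actual ATLAS roles.

        # Toy RBAC check: a user may perform an action if any held role grants it.
        ROLE_PERMISSIONS = {
            "shifter": {"read_monitoring"},
            "expert":  {"read_monitoring", "modify_configuration"},
            "admin":   {"read_monitoring", "modify_configuration", "manage_roles"},
        }
        USER_ROLES = {"alice": {"expert"}, "bob": {"shifter"}}

        def is_allowed(user: str, action: str) -> bool:
            return any(action in ROLE_PERMISSIONS.get(role, set())
                       for role in USER_ROLES.get(user, set()))

        print(is_allowed("alice", "modify_configuration"))  # True
        print(is_allowed("bob", "modify_configuration"))    # False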

  3. Role Based Access Control System in the ATLAS Experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Avolio, G; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F

    2010-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen from the design of the infrastructure onwards and is now central to the operations model. In order to cope with the ever-growing need to restrict access to all resources used within the experiment, the Role Based Access Control (RBAC) system previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design and implementation, and of the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...

  4. The ATLAS High Level Trigger Steering Framework and the Trigger Configuration System.

    CERN Document Server

    Perez Cavalcanti, Tiago; The ATLAS collaboration

    2011-01-01

    The ATLAS detector system installed in the Large Hadron Collider (LHC) at CERN is designed to study proton-proton and nucleus-nucleus collisions with a maximum centre-of-mass energy of 14 TeV at a bunch collision rate of 40 MHz. In March 2010 the four LHC experiments saw the first proton-proton collisions at 7 TeV, and a collision rate of nearly 10 MHz was expected within the same year. At ATLAS, events of potential interest for ATLAS physics are selected by a three-level trigger system, with a final recording rate of about 200 Hz. The first level (L1) is implemented in custom hardware; the two levels of the high level trigger (HLT) are software triggers, running on large farms of standard computers and network devices. Within the ATLAS physics program more than 500 trigger signatures are defined. The HLT tests each signature on each L1-accepted event; the test outcome is recorded for later analysis. The HLT-Steering is responsible for this. It foremost ensures the independent test of each signature, guaranteeing u...
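
    A sketch of the steering behaviour described above, with invented placeholder signatures: every signature is evaluated independently on each L1-accepted event, each outcome is recorded for later analysis, and the event is kept if at least one signature passes.

        # Toy HLT steering loop; real signatures are chains of algorithms.
        signatures = {
            "e25":  lambda ev: ev.get("electron_pt", 0.0) > 25.0,
            "mu20": lambda ev: ev.get("muon_pt", 0.0) > 20.0,
        }

        def steer(event: dict):
            outcomes = {}
            for name, test in signatures.items():
                try:  # a failing signature must not bias the others
                    outcomes[name] = bool(test(event))
                except Exception:
                    outcomes[name] = False
            return any(outcomes.values()), outcomes

        accepted, record = steer({"electron_pt": 30.0})
        print(accepted, record)  # True {'e25': True, 'mu20': False}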

  5. Atlas Pulsed Power System: a Driver for Multi-Megagauss Fields

    International Nuclear Information System (INIS)

    Cochrane, J.C.; Bartsch, R.R.; Bennett, G.A.; Bowman, D.W.; Davis, H.A.; Ekdahl, C.A.; Gribble, R.F.; Kimerly, H.J.; Nielsen, K.E.; Parsons, W.M.; Paul, J.D.; Scudder, D.W.; Trainor, R.J.; Thompson, M.C.; Watt, R.G.

    1998-01-01

    Atlas is a pulsed power machine designed for hydrodynamic experiments for the Los Alamos High Energy Density Physics Experimental program. It is presently under construction and should be operational in late 2000. Atlas will store 23 MJ at an erected voltage of 240 kV. This will produce a current of 30 MA into a static load and as much as 32 MA into a dynamic load. The current pulse will have a rise time of approximately 5 microseconds and will produce a magnetic field of several hundred tesla driving the impactor liner at the target radius of one to two centimeters. The collision can produce shock pressures of approximately 15 megabars. The design of the pulsed power system will be presented along with data obtained from the Atlas prototype Marx module.
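
    A back-of-envelope consistency check on the quoted figures, assuming the full 23 MJ is stored capacitively in the Marx banks at the 240 kV erected voltage, from E = CV^2/2:

        \[ C \;=\; \frac{2E}{V^{2}} \;=\; \frac{2 \times 23 \times 10^{6}\ \mathrm{J}}{(240 \times 10^{3}\ \mathrm{V})^{2}} \;\approx\; 0.8\ \mathrm{mF}. \]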

  6. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort was put into commissioning the various units of ATLAS' complex cryogenic system. This is in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. [Photo: The liquid helium and nitrogen ATLAS refrigerators in USA15.] Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without the cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks, compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground, the helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  7. DAQ cards for the Compact Muon Solenoid: a successful technology transfer case

    CERN Document Server

    Barone, M; Geralis, T; Mastroyiannopoulos, N; Tzamarias, S; Zachariadou, K; Tsoussis, L

    2002-01-01

    In this paper we describe a project carried out by a collaboration of researchers, engineers and managers from Hourdakis Electronics S.A., a medium-sized Greek company, and the research laboratories CERN in Geneva and DEMOKRITOS in Athens. The project involved the production of 22 input-output DAQ electronic modules to be used for R&D purposes in the Compact Muon Solenoid experiment of the LHC at CERN. This project can be considered a successful technology transfer.

  8. Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network

    CERN Document Server

    Schwemmer, Rainer; Neufeld, N; Svantesson, D

    2011-01-01

    The LHCb read-out uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GbE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs, which first assemble the fragments from the L1 boards and then perform a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out cha...
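
    One way such a probe can detect losses in a push-only transport with no retransmission, as described above, is to watch for gaps in a per-source sequence number; the packet format here is invented for illustration.

        # Count fragments missing from a monotonically increasing sequence.
        def count_losses(seen: list) -> int:
            losses, expected = 0, None
            for seq in seen:
                if expected is not None and seq > expected:
                    losses += seq - expected  # gap => that many fragments lost
                expected = seq + 1
            return losses

        print(count_losses([0, 1, 2, 5, 6]))  # 2: fragments 3 and 4 never arrived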

  9. Danish heat atlas as a support tool for energy system models

    International Nuclear Information System (INIS)

    Petrovic, Stefan N.; Karlsson, Kenneth B.

    2014-01-01

    Highlights:
    • The GIS method for calculating the costs of district heating expansion is presented.
    • High socio-economic potential for district heating is identified within urban areas.
    • A method for coupling a heat atlas with the TIMES optimization model is proposed.
    • The presented methods can be used for any geographical region worldwide.
    Abstract: In the past four decades following the global oil crisis in 1973, Denmark has implemented remarkable changes in its energy sector, mainly due to energy conservation measures on the demand side and energy efficiency improvements on the supply side. Nowadays, capital-intensive infrastructure investments, such as the expansion of district heating networks and the introduction of significant heat saving measures, require highly detailed decision-support tools. A Danish heat atlas provides a highly detailed database with extensive information about more than 2.5 million buildings in Denmark. Energy system analysis tools incorporate environmental, economic, energy and engineering analysis of future energy systems and are considered crucial for the quantitative assessment of transitional scenarios towards future milestones, such as the EU 2020 goals and Denmark's goal of achieving a fossil-free society after 2050. The present paper shows how a Danish heat atlas can be used to provide inputs to energy system models, especially related to the analysis of heat saving measures within the building stock and the expansion of district heating networks. As a result, marginal cost curves are created, approximated and prepared for use in an optimization energy system model. Moreover, it is concluded that a heat atlas can contribute as a tool for data storage and visualisation of results.
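
    A sketch, with invented numbers and field names, of how per-building heat-atlas records can be turned into the stepwise marginal cost curve that is fed to an optimization model such as TIMES:

        # Rank buildings by cost per delivered MWh; each step of the curve is
        # (cumulative supply, marginal cost). All data are illustrative.
        buildings = [
            {"heat_demand_mwh": 120, "connection_cost_eur": 6000},
            {"heat_demand_mwh": 80,  "connection_cost_eur": 9000},
            {"heat_demand_mwh": 200, "connection_cost_eur": 4000},
        ]
        ranked = sorted(buildings,
                        key=lambda b: b["connection_cost_eur"] / b["heat_demand_mwh"])
        cumulative = 0.0
        for b in ranked:
            cumulative += b["heat_demand_mwh"]
            marginal = b["connection_cost_eur"] / b["heat_demand_mwh"]
            print(f"{cumulative:6.0f} MWh supplied at <= {marginal:5.1f} EUR/MWh")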

  10. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons and will give the detector the same inclination as the LHC accelerator.

  11. The ATLAS Trigger System: Ready for Run II

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger system has been used successfully for data collection in the 2009-2013 Run 1 operation cycle of the CERN Large Hadron Collider (LHC) at centre-of-mass energies of up to 8 TeV. With the restart of the LHC for the new Run 2 data-taking period at 13 TeV, the trigger rates are expected to rise by approximately a factor of 5. The trigger system consists of a hardware-based first level (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of ~1 kHz. This presentation will give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown period in order to deal with the increased trigger rates while efficiently selecting the physics processes of interest. These upgrades include changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system, and the merging of the previously two-level HLT ...

  12. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter fa...

  13. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN's EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic property of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system. [Photo: The former compressor-based cooling system of the ATLAS inner detectors, currently being replaced by the innovative thermosiphon. Photo courtesy of Olivier Crespo-Lopez.] Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. "The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors," says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....
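
    The generic driving-pressure relation behind any thermosiphon (a textbook formula, not an ATLAS design figure): the density difference between the cold descending column and the warm ascending column over a height h sustains the circulation,

        \[ \Delta p \;=\; \left(\rho_{\mathrm{cold}} - \rho_{\mathrm{hot}}\right) g\, h, \]

    which is why no compressor or pump is needed in the circuit.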

  14. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. Newly available WLAP items relating to ATLAS: the Atlas Physics Workshop, 6-11 June 2005, and the June 2005 ATLAS Week Plenary Session. Browse WLAP for all ATLAS lectures.

  15. ATLAS Detector Control System Data Viewer

    CERN Document Server

    Tsarouchas, Charilaos; Roe, S; Bitenc, U; Fehling-Kaschek, ML; Winkelmann, S; D’Auria, S; Hoffmann, D; Pisano, O

    2011-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System [1] (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated standalone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as “value over time” charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by XML con...

  16. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, Mark S

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. Maintaining high trigger efficiency for the physics we are most interested in, while at the same time suppressing high-rate physics from inclusive QCD processes, is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high-quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element in achieving these needs. As the instantaneous luminosity increases, the computational load on the LVL2 system will significantly increase due to the need for more sophisticated algorithms to suppress backgrounds. The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system. It is designed to enable early rejection of background events and thus leave more LVL2 execution time by moving...

  17. Rucio, the next-generation Data Management system in ATLAS

    CERN Document Server

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2014-01-01

    Rucio is the next generation of the Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large-scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability: it requires a large support staff to operate and is hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, covering new user requirements and employing a new automation framework to reduce operational overheads. In this talk, we present the history of the DDM project and the experience of data management operation in ATLAS computing. We then show the key concepts of Rucio, including its data organization. The Rucio design, and the technology it e...

  18. The Linux based distributed data acquisition system for the ISTRA+ experiment

    International Nuclear Information System (INIS)

    Filin, A.; Inyakin, A.; Novikov, V.; Obraztsov, V.; Smirnov, N.; Vlassov, E.; Yuschenko, O.

    2001-01-01

    The DAQ hardware of the ISTRA+ experiment consists of a VME system crate that contains two PCI-VME bridges interfacing two PCs with VME, an external interrupt receiver, the readout controller for the dedicated front-end electronics, the readout controller buffer memory module, a VME-CAMAC interface, and additional control modules. The DAQ computing consists of 6 PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics. The second one is the slow control computer. The remaining PCs host the monitoring and data analysis software. The Linux-based DAQ software provides external interrupt processing and data acquisition, recording, and distribution between the monitoring and data analysis tasks running on the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database.

  19. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We discuss a number of different strategies used to validate the ATLAS offline software: running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking of the compilation of the software by distributed teams of rotating shifters; monitoring of and follow-up on bug reports by the shifter teams; and periodic software cleaning weeks to further improve the quality of the offline software.

  20. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and the shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where they can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of the given language to simplify end-user program writing. For example, since C++ offers no concise language support for declaring rich exception hierarchies, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class passing relevant values to one
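
    The compile-time macro trick is specific to C++; as a rough illustration of the same idea, a Python analogue (invented here, not the actual ERS Python API) can generate an exception hierarchy dynamically so that each issue class carries its static information:

        # Toy generator of an exception hierarchy with static metadata.
        def declare_issue(name: str, base: type = Exception, **static_info):
            def __init__(self, **context):
                self.context = context
                Exception.__init__(self, f"{name}: {static_info.get('text', '')} {context}")
            return type(name, (base,), {"__init__": __init__, "info": static_info})

        DataFlowIssue = declare_issue("DataFlowIssue", text="problem in the data flow")
        TimeoutIssue = declare_issue("TimeoutIssue", DataFlowIssue, text="component timed out")

        try:
            raise TimeoutIssue(component="ROS-42", timeout_s=5)
        except DataFlowIssue as err:  # caught through the generated hierarchy
            print(err)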

  1. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system of the ATLAS experiment is designed to lower the event rate from the nominal bunch-crossing rate of 40 MHz to about 1 kHz for a design LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data-taking run, the LHC is expected to run from 2015 with much higher instantaneous luminosities, and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...
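
    An illustrative software analogue of the associative-memory pattern matching used by FTK-like processors: hits are coarsened into "superstrips" and the resulting word is looked up in a pre-computed pattern bank (all values invented; in hardware the bank is matched massively in parallel rather than by a single lookup).

        # Toy pattern matching: one superstrip ID per detector layer.
        PATTERN_BANK = {
            (3, 5, 7, 9),
            (2, 4, 7, 8),
        }

        def to_superstrips(hits, width: float = 1.0):
            """Coarsen one hit position per layer into superstrip IDs."""
            return tuple(int(h // width) for h in hits)

        def match(hits) -> bool:
            return to_superstrips(hits) in PATTERN_BANK

        print(match([3.2, 5.9, 7.4, 9.1]))  # True: matches the first pattern
        print(match([0.5, 5.9, 7.4, 9.1]))  # False: no pattern fits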

  2. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P. [Queen Mary, University of London, London (United Kingdom); Bosman, M. [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D. [CERN, Geneva (Switzerland); Caprini, M. [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A. [University of California Irvine, Irvine, California (United States); Costa, M.J. [CERN, Geneva (Switzerland); Della Pietra, M. [INFN Sezione diNapoli, Napoli (Italy); Dotti, A. [Universita and INFN Pisa, Pisa (Italy); Eschrich, I. [University of California Irvine, Irvine, California (United States); Ferrari, R. [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M.L. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G. [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H. [Southern Methodist University, Dallas (United States); Hauschild, M. [CERN, Geneva (Switzerland); Hillier, S. [University of Birmingham, Birmingham (United Kingdom); Kehoe, B. [Southern Methodist University, Dallas (United States); Kolos, S. [University of California Irvine, Irvine, California (United States); Kordas, K. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R. [University of Victoria, Vancouver (Canada)] (and others)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a centre-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common, scalable, distributed monitoring framework that can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  3. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P [Queen Mary, University of London, London (United Kingdom); Bosman, M [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D [CERN, Geneva (Switzerland); Caprini, M [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A [University of California Irvine, Irvine, California (United States); Costa, M J [CERN, Geneva (Switzerland); Della Pietra, M [INFN Sezione diNapoli, Napoli (Italy); Dotti, A [Universita and INFN Pisa, Pisa (Italy); Eschrich, I [University of California Irvine, Irvine, California (United States); Ferrari, R [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M L [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H [Southern Methodist University, Dallas (United States); Hauschild, M [CERN, Geneva (Switzerland); Hillier, S [University of Birmingham, Birmingham (United Kingdom); Kehoe, B [Southern Methodist University, Dallas (United States); Kolos, S [University of California Irvine, Irvine, California (United States); Kordas, K [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R [University of Victoria, Vancouver (Canada)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a centre-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common, scalable, distributed monitoring framework that can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  4. The GNAM system in the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Salvatore, D.; Adragna, P.; Bosman, M.; Burckhart, D.; Caprini, M.; Corso-Radu, A.; Costa, M.J.; Della Pietra, M.; Dotti, A.; Eschrich, I.; Ferrari, R.; Ferrer, M.L.; Gaudio, G.; Hadavand, H.; Hauschild, M.; Hillier, S.; Kehoe, B.; Kolos, S.; Kordas, K.; Mcpherson, R.

    2007-01-01

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a centre-of-mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common, scalable, distributed monitoring framework that can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  5. Technical Design Report for the Phase-I Upgrade of the ATLAS TDAQ System

    CERN Document Server

    AUTHOR|(CDS)2069742; Abbott, Brad; Abdallah, Jalal; Abdel Khalek, Samah; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Achenbach, Ralf; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Aefsky, Scott; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmad, Ashfaq; Ahmadov, Faig; Aielli, Giulio; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexandrov, Evgeny; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allison, Lee John; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alonso, Francisco; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amaral Coutinho, Yara; Amelung, Christoph; Amor Dos Santos, Susana Patricia; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, John Thomas; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Araujo Ferraz, Victor; Arce, Ayana; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Augsten, Kamil; Augusto, José; Aurousseau, Mathieu; Avolio, Giuseppe; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baas, Alessandra; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Sarah; Balek, Petr; Ballestrero, Sergio; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bartsch, Valeria; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Batraneanu, Silvia; Battistin, Michele; Bauer, Florian; Bauss, Bruno; Bawa, Harinder Singh; Beacham, James Baker; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; 
Bellerive, Alain; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernard, Clare; Bernat, Pauline; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertelsen, Henrik; Bertolucci, Federico; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Bittner, Bernhard; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boek, Thorsten Tobias; Bogdan, Mircea Arghir; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldyrev, Alexey; Bolnet, Nayanka Myriam; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borga, Andrea; Borisov, Anatoly; Borissov, Guennadi; Borri, Marcello; Borroni, Sara; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutouil, Sara; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Brawn, Ian; Brazzale, Simone Federico; Brelier, Bertrand; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Richard; Bressler, Shikma; Bristow, Kieran; Bristow, Timothy Michael; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Brown, Gareth; Brown, Jonathan; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bundock, Aaron Colin; Bunse, Moritz; Burdin, Sergey; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Volker; Bussey, Peter; Buszello, Claus-Peter; Butler, Bart; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Buzatu, Adrian; Byszewski, Marcin; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; 
Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Casadei, Diego; Casado, Maria Pilar; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerio, Benjamin; Cerny, Karel; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chan, Kevin; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Charfeddine, Driss; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Liming; Chen, Shenjian; Chen, Xin; Chen, Yujiao; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiefari, Giovanni; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christidi, Ilektra-Athanasia; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciocio, Alessandra; Ciodaro Xavier, Thiago; Cirkovic, Predrag; Citraro, Saverio; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Clarke, Robert; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Coggeshall, James; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collins-Tooth, Christopher; Collot, Johann; Colombo, Tommaso; Colon, German; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Connelly, Ian; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Cottin, Giovanna; Coura Torres, Rodrigo; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Crispin Ortuzar, Mireia; Cristinziani, Markus; Crone, Gordon Jeremy; Crosetti, Giovanni; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Daniells, Andrew Christopher; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Darmora, Smita; Dassoulas, James; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; Davies, Merlin; 
Davignon, Olivier; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dechenaux, Benjamin; Dedovich, Dmitri; Degenhardt, James; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delemontex, Thomas; Deliot, Frederic; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; do Vale, Maria Aline Barros; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Doglioni, Caterina; Doherty, Tom; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drake, Gary; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudziak, Fanny; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Dwuznik, Michal; Ebke, Johannes; Edmunds, Daniel; Edson, William; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Erdmann, Johannes; Ereditato, Antonio; Ermoline, Iouri; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Faulkner, Peter; Favareto, Andrea; Fayard, Louis; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Fehling-Kaschek, Mirjam; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Fernandez Perez, Sonia; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, 
Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Julia; Fisher, Matthew; Fitzgerald, Eric Andrew; Flechl, Martin; Fleck, Ivor; Fleischmann, Philipp; Fleischmann, Sebastian; Fletcher, Gareth Thomas; Fletcher, Gregory; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Florez Bustos, Andres Carlos; Flowerdew, Michael; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fox, Harald; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Conrad; Friedrich, Felix; Froidevaux, Daniel; Front, David Moris; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gadatsch, Stefan; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gandrajula, Reddy Pratap; Gao, Jun; Gao, Yongsheng; Garay Walls, Francisca; Garberson, Ford; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; Gentsos, Christos; George, Matthias; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghibaudi, Marco; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giangiobbe, Vincent; Giannetti, Paola; Gianotti, Fabiola; Gibson, Stephen; Gillam, Thomas; Gillberg, Dag; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Giunta, Michele; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glonti, George; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goeringer, Christian; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Gama, Rafael; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabas, Herve Marie Xavier; Graber, Lars; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Green, Barry; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, 
Sebastian; Gris, Philippe Luc Yves; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grohs, Johannes Philipp; Grohsjean, Alexander; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Groth-Jensen, Jacob; Grout, Zara Jane; Grybel, Kai; Guan, Liang; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guicheney, Christophe; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Gunther, Jaroslav; Guo, Jun; Gupta, Shaun; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guttman, Nir; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Haefner, Petra; Hageböck, Stephan; Hakobyan, Hrachya; Haleem, Mahsana; Hall, David; Halladjian, Garabed; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Heisterkamp, Simon; Hejbal, Jiri; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, James; Henderson, Robert; Hengler, Christopher; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herrberg-Schubert, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hickling, Robert; Higón-Rodriguez, Emilio; Higuchi, Kota; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hofmann, Julia Isabell; Hohlfeld, Marc; Holmes, Tova Ray; Hong, Tae Min; Hooft van Huysduynen, Loek; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Xueye; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huettmann, Antje; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Hurwitz, Martina; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikematsu, Katsumasa; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Inamaru, Yuki; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Matthew; Jackson, 
Paul; Jaekel, Martin; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansen, Hendrik; Janssen, Jens; Jansweijer, Peter Paul Maarten; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Joergensen, Morten Dam; Johansson, Erik; Johansson, Per; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Joos, Markus; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Jussel, Patrick; Juste Rozas, Aurelio; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahra, Christian; Kajomovitz, Enrique; Kaluza, Adam; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karastathis, Nikolaos; Karnevskiy, Mikhail; Karpov, Sergey; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Katre, Akshay; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Kazarov, Andrei; Keeler, Richard; Kehoe, Robert; Keil, Markus; Keller, John; Kempster, Jacob Julian; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kiese, Patric Karl; Kim, Hyeon Jin; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kitamura, Takumi; Kiuchi, Kenji; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klimkovich, Tatsiana; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Klok, Peter; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolanoski, Hermann; Koletsou, Iro; Koll, James; Kolos, Serguei; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Köneke, Karsten; König, Adriaan; K{ö}nig, Sebastian; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kreiss, Sven; 
Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kurumida, Rie; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; La Rosa, Alessandro; La Rotonda, Laura; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laier, Heiko; Laisne, Emmanuel; Lambourne, Luke; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Laurens, Philippe; Lavorini, Vincenzo; Lavrijsen, Wim; Laycock, Paul; Le, Bao Tran; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire, Alexandra; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leonhardt, Kathrin; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Lester, Christopher Michael; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Bo; Li, Haifeng; Li, Ho Ling; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberali, Valentino; Liberti, Barbara; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Linde, Frank; Lindquist, Brian Edward; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Lombardo, Vincenzo Paolo; Long, Brian Alexander; Long, Jonathan; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Luciano, Pierluigi; Lucotte, Arnaud; Ludwig, Dörthe; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Johan; Lundberg, Olof; Lund-Jensen, Bengt; Lungwitz, Matthias; Luongo, Carmela; Lupu, Nachman; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Macey, Tom; Machado 
Miguens, Joana; Macina, Daniela; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeno, Mayuko; Maeno, Tadashi; Magnoni, Luca; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, Joern; Mahmoud, Sara; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Maldaner, Stephan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mamuzic, Judita; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany Andreina; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mantifel, Rodger; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian; Martin, Brian; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Homero; Martinez, Mario; Martin-Haugh, Stewart; Martyniuk, Alex; Marx, Marilyn; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Matsunaga, Hiroyuki; Matsushita, Takashi; Mättig, Peter; Mättig, Stefan; Mattmann, Johannes; Mattravers, Carly; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazzaferro, Luca; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; Mcfayden, Josh; Mchedlidze, Gvantsa; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Medinnis, Michael; Meehan, Samuel; Meera-Lebbai, Razzak; Meessen, Christophe; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mendoza Navas, Luis; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Meric, Nicolas; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Mitsui, Shingo; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Moeller, Victoria; Mohapatra, Soumya; Molander, Simon; Moles-Valls, Regina; Mönig, Klaus; Monini, Caterina; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Moyse, Edward; Muanza, Steve; Mudd, Richard; 
Mueller, Felix; Mueller, James; Mueller, Klemens; Mueller, Thibaut; Mueller, Timo; Muenstermann, Daniel; Munwes, Yonathan; Murillo Garcia, Raul; Murillo Quijada, Javier Alberto; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Nanava, Gizo; Napier, Austin; Narayan, Rohin; Nash, Michael; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Negrini, Matteo; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaidis, Spyridon; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Norberg, Scarlet; Nordberg, Markus; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nuti, Francesco; O'Brien, Brendan Joseph; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakes, Louise Beth; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panduro Vazquez, William; Panes, Boris; Pani, Priscilla; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pearce, James; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, 
Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petteni, Michele; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Piendibene, Marco; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Pingel, Almut; Pinto, Belmiro; Pizio, Caterina; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Poddar, Sahill; Podlyski, Fabrice; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Pohl, Martin; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Portell Bueso, Xavier; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Price, Darren; Price, Joe; Price, Lawrence; Primavera, Margherita; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przybycien, Mariusz; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Puldon, David; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Weiming; Quadt, Arnulf; Quarrie, David; Quayle, William; Quilty, Donnchadha; Quinonez, Fernando; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Rangel-Smith, Camila; Rao, Kanury; Rauscher, Felix; Rave, Stefan; Rave, Tobias Christian; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reinsch, Andreas; Reisin, Hernan; Reiss, Andreas; Relich, Matthew; Rembser, Christoph; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Ridel, Melissa; Rieck, Patrick; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodrigues, Luis; Roe, Shaun; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romeo, Gaston; Romero Adam, Elena; Romero Maltrana, Diego; Rompotis, Nikolaos; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Christian; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Rutherfoord, John; Ruthmann, Nils; Ruzicka, Pavel; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sacerdoti, 
Sabrina; Saddique, Asif; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarkisyan-Grinbaum, Edward; Sarrazin, Bjorn; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Yuichi; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savage, Graham; Savard, Pierre; Savu, Dan Octavian; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaeffer, Jan; Schaelicke, Andreas; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schettino, Vinicius; Schiavi, Carlo; Schieck, Jochen; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Christopher; Schmitt, Klaus; Schmitt, Sebastian; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schroeder, Christian; Schroer, Nicolai; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciacca, Gianfranco; Scifo, Estelle; Sciolla, Gabriella; Scott, Bill; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellers, Graham; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Serre, Thomas; Seuster, Rolf; Severini, Horst; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shehu, Ciwake Yusufu; Sherwood, Peter; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shochet, Mel; Shooltz, Dean; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Shushkevich, Stanislav; Sicho, Petr; Sicoe, Alexandru Dan; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silva Oliveira, Marcos Vinicius; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simoniello, Rosa; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sivoklokov, Serguei; Siyad, Mohamed Jimcaale; Sjölin, Jörgen; Sjursen, Therese; 
Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snow, Joel; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Soloviev, Igor; Solovyanov, Oleg; Solovyev, Victor; Soni, Nitesh; Sood, Alexander; Sopko, Bruno; Sopko, Vit; Sorin, Veronica; Sosebee, Mark; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soueid, Paul; Soukharev, Andrey; South, David; Spagnolo, Stefania; Spanò, Francesco; Spearman, William Robert; Spighi, Roberto; Spigo, Giancarlo; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; St Denis, Richard Dante; Stabile, Alberto; Stahlman, Jonathan; Staley, Richard; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Stucci, Stefania Antonia; Stugu, Bjarne; Stupak, John; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramania, Halasya Siva; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Swedish, Stephen; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taghavirad, Saeed; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tamsett, Matthew; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Shuji; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tapprogge, Stefan; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thoma, Sascha; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thong, Wai Meng; Tian, Feng; Tibbetts, Mark James; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomlinson, 
Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Tran, Huong Lan; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; True, Patrick; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turra, Ruggero; Tuts, Michael; Twomey, Matthew Shaun; Tykhonov, Andrii; Tylmad, Maja; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ugland, Maren; Uhlenbrock, Mathias; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Urbaniec, Dustin; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Vieira De Souza, Julio; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Virzi, Joseph; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Walsh, Brian; Wang, Chao; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Xiaoxiao; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Wasicki, Christoph; Watanabe, Ippei; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; 
Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wendland, Dennis; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wenzel, Volker; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Whittington, Denver; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Hugh; Williams, Sarah; Willocq, Stephane; Wilson, Alan; Wilson, John; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wittig, Tobias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wraight, Kenneth; Wright, Michael; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Un-Ki; Yang, Yi; Yanush, Serguei; Yao, Liwen; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yen, Andy L; Yildirim, Eda; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zaytsev, Alexander; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Lei; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Lei; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Zinonos, Zinonas; Ziolkowski, Michael; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zutshi, Vishnu; Zwalinski, Lukasz; CERN. Geneva. The LHC experiments Committee; LHCC

    2013-01-01

    The Phase-I upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is intended to allow the ATLAS experiment to trigger efficiently and record data at instantaneous luminosities up to three times that of the original LHC design, while maintaining trigger thresholds close to those used in the initial run of the LHC.

  6. Upgrade of the ATLAS detectors and trigger at the High Luminosity LHC: tracking and timing for pile-up suppression

    CERN Document Server

    Testa, Marianna; The ATLAS collaboration

    2018-01-01

    The High-Luminosity Large Hadron Collider is expected to start data-taking in 2026 and to provide an integrated luminosity of 3000 fb⁻¹, a factor of 10 more data than will have been collected by 2023. This large dataset will make it possible to perform precise measurements in the Higgs sector and to improve searches for new physics at the TeV scale. The luminosity is expected to be 7.5 × 10³⁴ cm⁻² s⁻¹, corresponding to about 200 proton-proton pile-up interactions, which will increase the rates at each level of the trigger and degrade the reconstruction performance. To cope with such a harsh environment, some sub-detectors of the ATLAS experiment will be upgraded or completely replaced and the Trigger-DAQ system will be upgraded. In this talk an overview of two new sub-detectors enabling powerful pile-up suppression, a new Inner Tracker and a proposed High Granularity Timing Detector, will be given, describing the two technologies, their performance, and their interplay. Emphasis will also be given to the possi...
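
    As a rough consistency check on the quoted pile-up figure, the mean number of interactions per bunch crossing can be estimated as μ = L · σ_inel / f_coll. The sketch below is illustrative only: the inelastic cross-section and the number of colliding bunches are assumed round numbers, not values taken from the record.

        # Back-of-the-envelope pile-up estimate: mu = L * sigma_inel / f_coll.
        # The cross-section and bunch count are assumed, illustrative values.
        lumi = 7.5e34        # instantaneous luminosity [cm^-2 s^-1], from the record
        sigma_inel = 80e-27  # assumed inelastic pp cross-section [cm^2] (~80 mb)
        n_bunches = 2750     # assumed number of colliding bunch pairs
        f_rev = 11245.0      # LHC revolution frequency [Hz]

        f_coll = n_bunches * f_rev       # effective bunch-crossing rate [Hz]
        mu = lumi * sigma_inel / f_coll  # mean pile-up interactions per crossing
        print(f"crossing rate ~ {f_coll / 1e6:.1f} MHz, pile-up mu ~ {mu:.0f}")

    With these inputs the estimate comes out near 190, consistent with the quoted value of about 200.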

  7. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  8. Developments and applications of DAQ framework DABC v2

    International Nuclear Information System (INIS)

    Adamczewski-Musch, J; Kurz, N; Linev, S

    2015-01-01

    The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013, version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command-channel socket have been provided. Web-browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins have been developed further, for example for interfacing PEXOR/KINPEX optical readout PCIe boards, or for HADES trbnet input and hld file output. In 2014, DABC v2 was used for production data taking during the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event-builder software and added new features for online monitoring.
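
    An HTTP control channel of this kind lends itself to simple scripted monitoring. The sketch below polls a status URL from a plain Python script; the host, port, path and JSON field names are invented for illustration and are not DABC's actual interface.

        # Minimal sketch: poll a DAQ node's HTTP monitoring endpoint.
        # The URL and the JSON fields ("state", "rate") are hypothetical.
        import json
        import time
        from urllib.request import urlopen

        STATUS_URL = "http://daq-node.example:8090/status"  # hypothetical

        def poll_status():
            with urlopen(STATUS_URL, timeout=5) as resp:
                return json.load(resp)

        if __name__ == "__main__":
            for _ in range(10):                  # take ten samples
                status = poll_status()
                print(f"state={status['state']}  rate={status['rate']:.1f} Hz")
                time.sleep(10)                   # poll every 10 seconds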

  9. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Nakahama, Yu; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger was used very successfully for online event selection during the first run of the LHC, from 2009 to 2013, at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger (L1) and a software-based high-level trigger (HLT) that together reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period, starting in early 2015 (Run-2), the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. We will review the upgrades to the ATLAS trigger system that have been implemented during the shutdown and that will allow us to cope with these increased trigger rates while maintaining or even improving the efficiency to select relevant physics processes. This includes changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system and the merging of the prev...
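
    The rates quoted above imply the overall rejection that the two trigger levels must deliver together. A short worked computation follows; the intermediate 100 kHz Level-1 output rate is an assumed, typical design value, not a number from the record.

        # Overall rejection implied by the quoted rates (40 MHz in,
        # a few hundred Hz recorded; 400 Hz is used as a round number).
        bc_rate = 40e6       # bunch-crossing rate [Hz], from the record
        l1_rate = 100e3      # assumed typical Level-1 accept rate [Hz]
        record_rate = 400.0  # assumed average recording rate [Hz]

        print(f"L1 rejection : {bc_rate / l1_rate:8.0f}x")
        print(f"HLT rejection: {l1_rate / record_rate:8.0f}x")
        print(f"total        : {bc_rate / record_rate:8.0f}x")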

  10. An introduction to the LAMPF data acquisition system

    International Nuclear Information System (INIS)

    Fu Saihong

    1993-01-01

    The LAMPF data acquisition systems are divided into a general DAQ system and the advanced MEGA DAQ system. The structure and future plans of the general system are described. The second-stage trigger has been implemented at LAMPF using a commercially available workstation and a VME interface. The implementation is described and measurements of data transfer speeds are presented

  11. Software framework developed for the slice test of the ATLAS endcap muon trigger system

    CERN Document Server

    Komatsu, S; Ishida, Y; Tanaka, K; Hasuko, K; Kano, H; Matsumoto, Y; Yakamura, Y; Sakamoto, H; Ikeno, M; Nakayoshi, K; Sasaki, O; Yasu, Y; Hasegawa, Y; Totsuka, M; Tsuji, S; Maeno, T; Ichimiya, R; Kurashige, H

    2002-01-01

    Slice tests of the ATLAS end-cap muon Level-1 trigger system were carried out separately in 2001 and 2002. For the 2001 slice test we developed our own software framework for property configuration and run control. The system is written entirely in C++, and control across multiple PCs is implemented using CORBA. We then restructured the software on top of the ATLAS Online Software framework and used this version for the 2002 slice test. In this report we discuss both systems in detail, with emphasis on module property configuration and run control. (8 refs).
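
    Run-control frameworks of this kind are typically built around a finite state machine that gates which commands are legal in which state. The following is a minimal, generic sketch of that pattern; the state names and commands are invented for illustration and are not those of the ATLAS Online Software.

        # Generic run-control finite state machine (toy illustration only;
        # not the ATLAS Online Software state model).
        TRANSITIONS = {
            ("INITIAL",    "configure"):   "CONFIGURED",
            ("CONFIGURED", "start"):       "RUNNING",
            ("RUNNING",    "stop"):        "CONFIGURED",
            ("CONFIGURED", "unconfigure"): "INITIAL",
        }

        class RunController:
            def __init__(self):
                self.state = "INITIAL"

            def command(self, cmd):
                """Apply a command if the transition table allows it."""
                key = (self.state, cmd)
                if key not in TRANSITIONS:
                    raise RuntimeError(f"'{cmd}' not allowed in state {self.state}")
                self.state = TRANSITIONS[key]
                print(f"{cmd}: now in state {self.state}")

        rc = RunController()
        for cmd in ("configure", "start", "stop", "unconfigure"):
            rc.command(cmd)

    A real framework adds error states, timeouts and distributed synchronization across many controllers on top of this core.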

  12. Cross-compilation of ATLAS online software to the PowerPC-VxWorks system

    International Nuclear Information System (INIS)

    Tian Yuren; Li Jin; Ren Zhengyu; Zhu Kejun

    2005-01-01

    BES III selected the ATLAS online software as the framework for its run-control system. Since BES III uses PowerPC-VxWorks on its front-end readout system, it was necessary to cross-compile this software to the PowerPC-VxWorks platform. The article covers several aspects of this project, such as the structure and organization of the ATLAS online software, the use of the CMT tool during cross-compilation, the selection and configuration of the cross-compiler, and methods for solving various problems caused by differences between the compilers and operating systems. After cross-compilation the software runs normally and, together with the software running on Linux, makes up a complete run-control system. (authors)

  13. Thermo-dynamical measurements for ATLAS Inner Detector (evaporative cooling system)

    CERN Document Server

    Bitadze, Alexander; Buttar, Craig

    During the construction, installation and initial operation of the Evaporative Cooling System for the ATLAS Inner Detector SCT Barrel sub-detector, some performance characteristics were observed to be inconsistent with the original design specifications; the assumptions made in the ATLAS Inner Detector TDR were therefore revisited. The main concern arose because of unexpected pressure drops in the piping system from the end of the detector structure to the distribution racks. The author of this thesis made a series of measurements of these pressure drops and of the thermal behavior of an SCT barrel cooling stave. Tests were performed on the installed detector in the pit and on a specially assembled full-scale replica in the SR1 laboratory at CERN. This test setup has been used to perform extensive tests of the cooling performance of the system, including measurements of pressure drops in different parts of the system and studies of the thermal profile along the stave pipe for different running conditions / parameters a...

  14. Beam Test of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Thomas, J P; Typaldos, D; Watkins, P M; Watson, A; Achenbach, R; Föhlisch, F; Geweniger, C; Hanke, P; Kluge, E E; Mahboubi, K; Meier, K; Meshkov, P; Rühr, F; Schmitt, K; Schultz-Coulon, H C; Ay, C; Bauss, B; Belkin, A; Rieke, S; Schäfer, U; Tapprogge, T; Trefzger, T; Weber, GA; Eisenhandler, E F; Landon, M; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Mirea, A; Perera, V J O; Qian, W; Sankey, D P C; Bohm, C; Hellman, S; Hidvegi, A; Silverstein, S

    2005-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce Regions-of-Interest (RoIs) and trigger multiplicities. The latter are sent in real time to the Central Trigger Processor (CTP), where the Level-1 decision is made. On receipt of a Level-1 Accept, Readout Driver modules (RODs) provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. RoI information is sent to the RoI Builder (RoIB) to help reduce the amount of data required by the Level-2 trigger. The Level-1 Calorimeter Trigger system at the test beam consisted of 1 Preprocessor Module, 1 Cluster Processor Module, 1 Jet/Energy Module and 2 Common Merger Modules. Calorimeter energies were successfully handled throughout the chain and trigger objects were sent to the CTP. Level-1 Accepts were successfully produced and used to drive the readout path. Online diagno...
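
    Conceptually, the CP and JEP find their trigger objects by sliding a small window across the (η, φ) grid of trigger towers and keeping local maxima above threshold. The sketch below illustrates that pattern only; the 2×2 window, the 10 GeV threshold and the tie-breaking rule are simplifying assumptions, not the actual Level-1 Calorimeter Trigger algorithms, which run in firmware pipelines at the bunch-crossing rate.

        # Toy Region-of-Interest finder for an (eta, phi) trigger-tower grid:
        # a 2x2 sliding-window sum must exceed threshold and be a local maximum.
        def find_rois(towers, threshold=10.0):
            ne, nphi = len(towers) - 1, len(towers[0]) - 1
            sums = [[towers[i][j] + towers[i + 1][j]
                     + towers[i][j + 1] + towers[i + 1][j + 1]
                     for j in range(nphi)] for i in range(ne)]
            rois = []
            for i in range(ne):
                for j in range(nphi):
                    s = sums[i][j]
                    if s <= threshold:
                        continue
                    ok = True
                    for a in range(max(i - 1, 0), min(i + 2, ne)):
                        for b in range(max(j - 1, 0), min(j + 2, nphi)):
                            if (a, b) == (i, j):
                                continue
                            # strict '>' towards larger indices breaks ties
                            if ((a, b) > (i, j) and s <= sums[a][b]) or s < sums[a][b]:
                                ok = False
                    if ok:
                        rois.append((i, j, s))
            return rois

        grid = [[0.0] * 8 for _ in range(8)]
        grid[3][4], grid[3][5] = 8.0, 5.0  # one energy deposit
        print(find_rois(grid))             # -> [(3, 4, 13.0)]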

  15. ATLAS Facility and Instrumentation Description Report

    International Nuclear Information System (INIS)

    Kang, Kyoung Ho; Moon, Sang Ki; Park, Hyun Sik

    2009-06-01

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). ATLAS is a half-height, 1/288-volume scaled test facility with respect to the APR1400. The fluid system of ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of ATLAS is simplified to a circulating loop type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instrumentation specific to the simulation of a 50% DVI line break accident of the APR1400, in support of the 50th OECD/NEA International Standard Problem Exercise (ISP-50).
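
    The two quoted scale factors fix the remaining geometric ratio: since volume = flow area × height, a half-height, 1/288-volume facility has a 1/144 flow-area scale. A one-line check, using no assumptions beyond that identity:

        # Geometric consistency of the quoted scaling factors:
        # volume ratio = area ratio x height ratio.
        height_ratio = 1 / 2    # half-height, from the record
        volume_ratio = 1 / 288  # 1/288 volume, from the record

        area_ratio = volume_ratio / height_ratio
        print(f"flow-area scale: 1/{1 / area_ratio:.0f}")  # -> 1/144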

  16. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    OpenAIRE

    Maeno, T; De, K; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of othe...

  17. Pixel DAQ and trigger for HL-LHC

    International Nuclear Information System (INIS)

    Morettini, P.

    2017-01-01

    The read-out is one of the challenges in the design of a pixel detector for the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), which is expected to operate from 2026 at a leveled luminosity of 5 × 10³⁴ cm⁻² s⁻¹. This is especially true if tracking information is needed in a low-latency trigger system. The difficulties of a fast read-out will be reviewed, and possible strategies explained. The solutions that are being evaluated by the ATLAS and CMS collaborations for the upgrade of their trackers will be outlined and ideas on possible developments beyond HL-LHC will be presented.
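
    To see why a fast pixel read-out is hard, it helps to put rough numbers on the required bandwidth. Every input in the sketch below (trigger rate, hit multiplicity, hit size) is an assumed, illustrative value, not a figure from the record:

        # Rough read-out bandwidth estimate for one pixel module.
        # All inputs are assumed, illustrative values, not detector specs.
        trigger_rate = 1.0e6    # assumed trigger rate [Hz]
        hits_per_trigger = 200  # assumed mean pixel hits per module per trigger
        bits_per_hit = 32       # assumed encoded hit size [bits]

        bandwidth = trigger_rate * hits_per_trigger * bits_per_hit  # [bit/s]
        print(f"~{bandwidth / 1e9:.1f} Gb/s per module")            # ~6.4 Gb/s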

  18. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS:
    - Atlas Software Week Plenary, 6-10 December 2004
    - North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks)
    - Physics Analysis Tools Tutorial (Tucson), 19 December 2004
    - Full Chain Tutorial, 21 September 2004
    - ATLAS Plenary Sessions, 17-18 February 2005 (17 talks)
    Coming soon:
    - ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005
    - Software Workshop, 21-22 February 2005
    Browse WLAP for all ATLAS lectures.

  19. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  1. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    Science.gov (United States)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

  2. Development and Test of the Cooling System for the ATLAS Hadron Tile Calorimeter

    CERN Document Server

    Schlager, Gerolf

    2002-01-01

    The ATLAS detector is a general-purpose experiment for proton-proton collisions designed to investigate the full range of physical processes at the Large Hadron Collider (LHC). The ATLAS Tile Hadron Calorimeter is designed to measure the energies of jets with a resolution of σ_E/E = 50%/√E ⊕ 3% for |η| < 3. This thesis presents the detailed studies which were carried out with prototypes of the Tilecal cooling system during my year as a technical student at CERN. The results will be used to validate and to determine the final design of the cooling system of the ATLAS Tile Calorimeter. The performance of the cooling unit built for the calibration of Tilecal modules was evaluated for various parameters like temperature stability and safety conditions during operation. Additionally I contributed to the analysis of the calorimeter response for different cooling temperatures. These results determined the constraints on the operation conditions of the cooling system in terms of temperature stability that will be needed d...
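
    Here ⊕ denotes addition in quadrature, so the design resolution can be evaluated at any jet energy E (in GeV). A small worked example; the sample energies are arbitrary:

        # Evaluate sigma_E/E = 50%/sqrt(E) (+) 3%, where (+) means addition
        # in quadrature; E is the jet energy in GeV.
        from math import sqrt

        def tile_resolution(energy_gev):
            stochastic = 0.50 / sqrt(energy_gev)  # sampling (stochastic) term
            constant = 0.03                       # constant term
            return sqrt(stochastic**2 + constant**2)

        for e in (10, 100, 1000):                 # arbitrary sample energies
            print(f"E = {e:4d} GeV: sigma_E/E = {100 * tile_resolution(e):.1f}%")

    This gives 16.1%, 5.8% and 3.4% respectively; at high energies the constant term dominates, which is why it is specified separately.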

  3. Report to users of ATLAS [Argonne Tandem-Line Accelerator System

    International Nuclear Information System (INIS)

    Ahmad, I.; Glagola, B.

    1987-03-01

    The operation and development of ATLAS are reported, including accelerator improvements. Particularly noted is an upgrade to extend the mass range of projectiles up to uranium and to increase the beam intensity by at least two orders of magnitude for all ions. Meetings are discussed, particularly of the Program Advisory Committee and the User Group Executive Committee. Some basic information is provided for users planning to run experiments at ATLAS, including a table of available beams. The data acquisition system for ATLAS, DAPHNE, is discussed, as are the following experimental facilities: the Argonne-Notre Dame Gamma Ray Facility and a proposed large-acceptance Fragment Mass Analyzer. Brief summaries are provided of some recent experiments for which data analysis is complete. Experiments performed during the period from June 1, 1986 to January 31, 1987 are tabulated, giving the experiment number, scientists, institution, experiment name, number of days, beam, and energy.

  5. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Carpeño, A., E-mail: antonio.cruiz@upm.es [Universidad Politécnica de Madrid UPM, Madrid (Spain); Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S. [Universidad Politécnica de Madrid UPM, Madrid (Spain); Vega, J.; Castro, R. [Laboratorio Nacional de Fusión CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  6. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    International Nuclear Information System (INIS)

    Carpeño, A.; Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.
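
    The server-side piece described in this record, a Linux service running EPICS, can be illustrated with a short Python sketch. It assumes the pyepics package and uses hypothetical process-variable (PV) names, not the real iRIO-3DLab ones:

        # Minimal sketch of an EPICS client service; PV names are invented.
        from epics import caget, caput

        def read_daq_status():
            # Poll a few hypothetical FlexRIO DAQ process variables.
            return {
                "sampling_rate": caget("IRIO:DAQ:SamplingRate"),
                "device_temp": caget("IRIO:DAQ:DevTemp"),
                "acq_running": caget("IRIO:DAQ:AcqRunning"),
            }

        def set_sampling_rate(hz):
            # caput writes the PV; wait=True blocks until the record processes.
            caput("IRIO:DAQ:SamplingRate", hz, wait=True)

        if __name__ == "__main__":
            set_sampling_rate(1000.0)
            print(read_daq_status())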

  7. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P; Nicolas, L

    2011-01-01

    The high radiation doses which will accumulate in components of the ATLAS experiment during data taking will cause damage to detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2, the displacement damage in silicon in terms of 1-MeV(Si) equivalent neutron fluence, and the fluence of thermal neutrons at several locations in the ATLAS detector. In this paper the design of the system, results of measurements, and a comparison of the measured integrated doses and fluences with predictions from FLUKA simulation are shown.

  8. Improving the ATLAS physics potential with the Fast Track Trigger System

    CERN Document Server

    Cavaliere, Viviana; The ATLAS collaboration

    2015-01-01

    The ATLAS Fast TracKer (FTK) is a custom electronics system that will operate at the full Level-1 accept rate, 100 kHz, to provide high quality tracks as input to the High-Level Trigger. The event reconstruction is performed in hardware, thanks to the massive parallelism of associative memories (AM) and FPGAs. We present the advantages for the physics goals of the ATLAS experiment and the recent results on the design, technological advancements and testing of some of the core components used in the processor.

  9. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    Energy Technology Data Exchange (ETDEWEB)

    Vandelli, Wainer, E-mail: wainer.vandelli@cern.c

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Moreover, routing and streaming capabilities as well as monitoring and data-accounting functionalities are fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  10. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    International Nuclear Information System (INIS)

    Vandelli, Wainer

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Moreover, routing and streaming capabilities as well as monitoring and data-accounting functionalities are fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were able to take advantage of this extended data-taking period as well. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  11. Thermal Performance of ATLAS Laser Thermal Control System Demonstration Unit

    Science.gov (United States)

    Ku, Jentung; Robinson, Franklin; Patel, Deepak; Ottenstein, Laura

    2013-01-01

    The second Ice, Cloud, and Land Elevation Satellite mission currently planned by the National Aeronautics and Space Administration will measure global ice topography and canopy height using the Advanced Topographic Laser Altimeter System (ATLAS). The ATLAS comprises two lasers, but only one will be used at a time. Each laser will generate between 125 watts and 250 watts of heat, and each laser has its own optimal operating temperature that must be maintained within plus or minus 1 degree Centigrade accuracy by the Laser Thermal Control System (LTCS), consisting of a constant conductance heat pipe (CCHP), a loop heat pipe (LHP) and a radiator. The heat generated by the laser is acquired by the CCHP and transferred to the LHP, which delivers the heat to the radiator for ultimate rejection. The radiator can be exposed to temperatures between minus 71 degrees Centigrade and minus 93 degrees Centigrade. The two lasers can have different operating temperatures varying between plus 15 degrees Centigrade and plus 30 degrees Centigrade, and their operating temperatures are not known while the LTCS is being designed and built. Major challenges of the LTCS include: 1) A single thermal control system must maintain the ATLAS at 15 degrees Centigrade with 250 watts heat load and minus 71 degrees Centigrade radiator sink temperature, and maintain the ATLAS at plus 30 degrees Centigrade with 125 watts heat load and minus 93 degrees Centigrade radiator sink temperature. Furthermore, the LTCS must be qualification tested to maintain the ATLAS between plus 10 degrees Centigrade and plus 35 degrees Centigrade. 2) The LTCS must be shut down to ensure that the ATLAS can be maintained above its lowest desirable temperature of minus 2 degrees Centigrade during the survival mode. No software control algorithm for LTCS can be activated during survival and only thermostats can be used. 3) The radiator must be kept above minus 65 degrees Centigrade to prevent ammonia from freezing using no more

  12. Using VME to leverage legacy CAMAC electronics into a high speed data acquisition system

    International Nuclear Information System (INIS)

    Anthony, P.L.

    1997-06-01

    The authors report on the first full scale implementation of a VME based Data Acquisition (DAQ) system at the Stanford Linear Accelerator Center (SLAC). This system was designed for use in the End Station A (ESA) fixed target program. It was designed to handle interrupts at rates up to 120 Hz and event sizes up to 10,000 bytes per interrupt. One of the driving considerations behind the design of this system was to make use of existing CAMAC based electronics and yet deliver a high performance DAQ system. This was achieved by basing the DAQ system on a VME backplane, allowing parallel control and readout of CAMAC branches and VME DAQ modules. This system was successfully used in the Spin Physics research program at SLAC (E154 and E155)

  13. Data acquisition system issues for large experiments

    International Nuclear Information System (INIS)

    Siskind, E.J.

    2007-01-01

    This talk consists of personal observations on two classes of data acquisition ('DAQ') systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic 'lessons learned' recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of 'system-on-a-chip' ('SOC') or 'platform' field-programmable gate arrays ('FPGAs') in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed
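
    To make the buffer-tier and flow-control/discard-policy discussion above concrete, the toy Python model below (invented rates and buffer depths, not the GLAST LAT numbers) pushes hits through two bounded buffers with a deliberate rate mismatch, so the downstream tier backs up and the front end must apply its discard policy:

        # Toy two-tier DAQ pipeline: produce 4 hits/tick, forward up to 2,
        # drain 1 -- the mismatch fills the buffers and forces discards.
        import queue

        tier1 = queue.Queue(maxsize=64)      # small front-end buffer
        tier2 = queue.Queue(maxsize=1024)    # larger downstream buffer
        dropped = written = 0

        for tick in range(1000):
            for hit in range(4):             # front-end production
                try:
                    tier1.put_nowait((tick, hit))
                except queue.Full:
                    dropped += 1             # discard policy: lose the hit
            for _ in range(2):               # tier1 -> tier2 mover
                if not tier1.empty() and not tier2.full():
                    tier2.put_nowait(tier1.get_nowait())   # flow control
            if not tier2.empty():            # "mass storage" drain
                tier2.get_nowait()
                written += 1

        print("written:", written, "dropped:", dropped,
              "still buffered:", tier1.qsize() + tier2.qsize())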

  14. A triggerless digital data acquisition system for nuclear decay experiments

    Energy Technology Data Exchange (ETDEWEB)

    Agramunt, J.; Tain, J. L.; Albiol, F.; Algora, A.; Estevez, E.; Giubrone, G.; Jordan, M. D.; Molina, F.; Rubio, B.; Valencia, E. [Instituto de Fisica Corpuscular, Centro Mixto C.S.I.C. - Univ. Valencia, Apdo. Correos 22085, 46071 Valencia (Spain)

    2013-06-10

    In nuclear decay experiments an important goal of the Data Acquisition (DAQ) system is to allow the reconstruction of time correlations between signals registered in different detectors. Classically, DAQ systems are based on a trigger that starts the event acquisition, and all data related to the event of that trigger are collected as one compact structure. New technologies and electronics developments offer new possibilities to nuclear experiments through the use of sampling ADCs. This type of ADC is able to provide the pulse shape, height and a time stamp for each signal. This new feature (the time stamp) allows systems to run without an event trigger; the event can be reconstructed later using the time-stamp information. In this work we present a new DAQ developed for β-delayed neutron emission experiments. Due to the long moderation time of neutrons, we opted for a self-triggered DAQ based on commercial digitizers. With this DAQ a negligible acquisition dead time was achieved while keeping a maximum of event information and flexibility in time correlations.
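
    The central idea, reconstructing events offline by grouping time-stamped hits within a correlation window, fits in a few lines. The sketch below uses invented hit tuples, not the actual digitizer data format:

        # Group time-stamped hits into "events" without a hardware trigger.
        def build_events(hits, window):
            """hits: iterable of (timestamp, detector, amplitude) tuples.
            Hits within `window` of the first hit form one event."""
            events, current, t0 = [], [], None
            for hit in sorted(hits):
                if t0 is None or hit[0] - t0 > window:
                    if current:
                        events.append(current)
                    current, t0 = [hit], hit[0]
                else:
                    current.append(hit)
            if current:
                events.append(current)
            return events

        # A beta hit and a neutron capture 80 us later fall into the same
        # event for a 100 us window; a hit at t=1 s starts a new event.
        print(build_events([(0.0, "beta", 5.1), (80e-6, "neutron", 1.2),
                            (1.0, "beta", 4.7)], window=100e-6))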

  15. Predictive analytics tools to adjust and monitor performance metrics for the ATLAS Production System

    CERN Document Server

    Titov, Mikhail; The ATLAS collaboration

    2017-01-01

    Every scientific workflow involves an organizational part whose purpose is to plan the analysis process thoroughly according to a defined schedule, and thus to keep work progressing efficiently. Information such as an estimate of the processing time or the probability of a system outage (abnormal behaviour) improves the planning process, helps in monitoring system performance and allows its next state to be predicted. The ATLAS Production System is an automated scheduling system that is responsible for the central production of Monte-Carlo data, highly specialized production for physics groups, as well as data pre-processing and analysis using such facilities as grid infrastructures, clouds and supercomputers. With its next generation (ProdSys2) the processing rate is around 2M tasks per year, corresponding to more than 365M jobs per year. ProdSys2 evolves to accommodate a growing number of users and new requirements from the ATLAS Collaboration, physics groups and individual users. ATLAS Distributed Computing in its current stat...

  16. The NeuARt II system: a viewing tool for neuroanatomical data based on published neuroanatomical atlases

    Directory of Open Access Journals (Sweden)

    Cheng Wei-Cheng

    2006-12-01

    Background: Anatomical studies of neural circuitry describing the basic wiring diagram of the brain produce intrinsically spatial, highly complex data of great value to the neuroscience community. Published neuroanatomical atlases provide a spatial framework for these studies. We have built an informatics framework based on these atlases for the representation of neuroanatomical knowledge. This framework not only captures current methods of anatomical data acquisition and analysis, it allows these studies to be collated, compared and synthesized within a single system. Results: We have developed an atlas-viewing application ('NeuARt II') in the Java language with unique functional properties. These include the ability to use copyrighted atlases as templates within which users may view, save and retrieve data-maps and annotate them with volumetric delineations. NeuARt II also permits users to view multiple levels on multiple atlases at once. Each data-map in this system is simply a stack of vector images with one image per atlas level, so any set of accurate drawings made onto a supported atlas (in vector graphics format) could be uploaded into NeuARt II. Presently the database is populated with a corpus of high-quality neuroanatomical data from the laboratory of Dr Larry Swanson (consisting of 64 highly detailed maps of PHAL tract-tracing experiments, made up of 1039 separate drawings that were published in 27 primary research publications over 17 years). Herein we take selective examples from these data to demonstrate the features of NeuARt II. Our informatics tool permits users to browse, query and compare these maps. The NeuARt II tool operates within a bioinformatics knowledge management platform (called 'NeuroScholar') either as a standalone or a plug-in application. Conclusion: Anatomical localization is fundamental to neuroscientific work and atlases provide an easily-understood framework that is widely used by neuroanatomists and non

  17. Detector Control System for the ATLAS Forward Proton detector

    CERN Document Server

    Czekierda, Sabina; The ATLAS collaboration

    2017-01-01

    The ATLAS Forward Proton (AFP) is a forward detector using the Roman Pot technique, recently installed in the LHC tunnel. It aims at registering protons that were diffractively or electromagnetically scattered in soft and hard processes. The infrastructure of the detector consists of hardware placed both in the tunnel and in the control room USA15 (about 330 metres from the Roman Pots). The AFP detector, like the other detectors of the ATLAS experiment, uses the Detector Control System (DCS) to supervise the detector and to ensure its safe and coherent operation, since incorrect detector performance may influence the physics results. The DCS continuously monitors the detector parameters, a subset of which is stored in databases. Crucial parameters are guarded by an alarm system. A representation of the detector as a hierarchical, tree-like structure of well-defined subsystems, built with the Finite State Machine (FSM) toolkit, allows for overall detector operation and visualization. Every node in the hierarchy is...
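
    The tree-of-subsystems idea is compact enough to sketch in Python. The toy below uses illustrative states and node names, not the actual AFP FSM definition, and shows how each node can derive its state from its children so that a fault in one Roman Pot becomes visible at the detector root:

        # Hierarchical FSM sketch: the worst state among children propagates up.
        SEVERITY = {"READY": 0, "NOT_READY": 1, "ERROR": 2}

        class FsmNode:
            def __init__(self, name, children=()):
                self.name, self.children = name, list(children)
                self.own_state = "READY"

            def state(self):
                states = [c.state() for c in self.children] + [self.own_state]
                return max(states, key=SEVERITY.get)

        rp_a, rp_c = FsmNode("RomanPot-A"), FsmNode("RomanPot-C")
        afp = FsmNode("AFP", [FsmNode("HighVoltage", [rp_a, rp_c]),
                              FsmNode("Cooling")])
        rp_c.own_state = "ERROR"
        print(afp.state())   # -> ERROR: the leaf fault reaches the top node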

  18. ATCA-based ATLAS FTK input interface system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Yasuyuki [Chicago U., EFI; Liu, Tiehui Ted [Fermilab; Olsen, Jamieson [Fermilab; Iizawa, Tomoya [Waseda U.; Mitani, Takashi [Waseda U.; Korikawa, Tomohiro [Waseda U.; Yorita, Kohei [Waseda U.; Annovi, Alberto [Frascati; Beretta, Matteo [Frascati; Gatta, Maurizio [Frascati; Sotiropoulou, C-L. [Aristotle U., Thessaloniki; Gkaitatzis, Stamatios [Aristotle U., Thessaloniki; Kordas, Konstantinos [Aristotle U., Thessaloniki; Kimura, Naoki [Aristotle U., Thessaloniki; Cremonesi, Matteo [Chicago U., EFI; Yin, Hang [Fermilab; Xu, Zijun [Peking U.

    2015-04-27

    The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker are clustered and organized into overlapping eta-phi trigger towers before being sent to the tracking engines. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce data volume. Then, the ATCA-based Data Formatter system will organize the trigger tower data, sharing data among boards over full mesh backplanes and optic fibers. The board and system level design concepts and implementation details, as well as the operation experiences from the FTK full-chain testing, will be presented.

  19. ATCA-based ATLAS FTK input interface system

    CERN Document Server

    Okumura, Y; The ATLAS collaboration; Olsen, J; Iizawa, T; Mitani, T; Korikawa, T; Yorita, K; Annovi, A; Beretta, M; Gatta, M; Sotiropoulou, C; Gkaitatzis, S; Kordas, K; Kimura, N; Cremonesi, M; Yin, H; Xu, Z

    2014-01-01

    The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker must be clustered and organized into overlapping eta-phi trigger towers before being sent to the tracking processors. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce data volume. Then, the ATCA-based Data Formatter system will organize the trigger tower data, sharing data among boards over a full-mesh backplane. The board and system level performance studies and implementation details, as well as the operation experiences from the FTK full-chain testing, will be presented.
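
    As an illustration of the tower-organization step described in these two records (with an invented segmentation and overlap, not the real FTK geometry), the sketch below routes a hit to every eta-phi tower whose slightly enlarged boundaries contain it, which is why a hit near a boundary belongs to more than one tower:

        import math

        N_ETA, N_PHI = 4, 16          # hypothetical 64-tower segmentation
        ETA_MAX, OVERLAP = 2.5, 0.1   # half-range in eta; tower overlap

        def towers_for_hit(eta, phi):
            deta = 2 * ETA_MAX / N_ETA
            dphi = 2 * math.pi / N_PHI
            towers = []
            for ie in range(N_ETA):
                lo = -ETA_MAX + ie * deta - OVERLAP
                hi = -ETA_MAX + (ie + 1) * deta + OVERLAP
                if not (lo <= eta < hi):
                    continue
                for ip in range(N_PHI):
                    centre = -math.pi + (ip + 0.5) * dphi
                    # wrap-safe angular distance in phi
                    d = math.atan2(math.sin(phi - centre), math.cos(phi - centre))
                    if abs(d) <= dphi / 2 + OVERLAP:
                        towers.append((ie, ip))
            return towers

        print(towers_for_hit(eta=0.05, phi=0.2))  # near an eta edge: 2 towers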

  20. The Resource Manager of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Aleksandrov, Igor; The ATLAS collaboration; Lehmann Miotto, Giovanna; Soloviev, Igor

    2016-01-01

    The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right of applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. Access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager's design is based on a client-server model, hence it consists of two components: the Resource Manager "server" application and the "client" shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager c...
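
    The lock-manager analogy can be made concrete with a few lines of Python. This is only an illustration of plausible granting semantics (resource names and copy counts invented; all-or-nothing and non-blocking by assumption), not the actual ATLAS implementation:

        from collections import Counter

        class ResourceManager:
            def __init__(self, resources):
                self.free = dict(resources)   # resource name -> copies left
                self.held = {}                # application -> granted list

            def request(self, app, needed):
                """All-or-nothing grant; refuse on conflict, don't block."""
                need = Counter(needed)
                if any(self.free.get(r, 0) < n for r, n in need.items()):
                    return False
                for r in needed:
                    self.free[r] -= 1
                self.held[app] = list(needed)
                return True

            def release(self, app):
                for r in self.held.pop(app, []):
                    self.free[r] += 1

        rm = ResourceManager({"ROS-segment": 2, "output-link": 1})
        assert rm.request("daq-app-1", ["ROS-segment", "output-link"])
        assert not rm.request("daq-app-2", ["output-link"])  # conflict refused
        rm.release("daq-app-1")
        assert rm.request("daq-app-2", ["output-link"])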

  1. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. Newly available WLAP items relating to ATLAS include: the June ATLAS Plenary Meeting, the Tutorial on Physics EDM and Tools (June), the Freiburg Overview Week, and Ketevi Assamagan's Tutorial on Analysis Tools. WLAP can be browsed for all ATLAS lectures.

  2. Control in the ATLAS TDAQ System

    CERN Document Server

    Liko, D; Flammer, J; Dobson, M; Jones, R; Mapelli, L; Alexandrov, I; Korobov, S; Kotov, V; Mineev, M; Amorim, A; Fiuza de Barros, N; Klose, D; Pedro, L; Badescu, E; Caprini, M; Kolos, S; Kazarov, A; Ryabov, Yu; Soloviev, I; Computing In High Energy Physics

    2005-01-01

    The TDAQ system requires a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialization and verification of the overall system. Following the traditional approach, a hierarchical system of customizable controllers has been proposed. In the final system all functionality will therefore be available in a distributed manner, with the possibility of local customization. After a technology survey, the open-source expert system CLIPS was chosen as the basis for the implementation of the supervision and verification system. The CLIPS interpreter has been extended to provide a general control framework. Other ATLAS Online software components have been integrated as plug-ins and provide the mechanism for configuration and communication. Several components have been implemented sharing this technology. The dynamic behavior of the individual component is fully described by th...

  3. Fine grained event processing on HPCs with the ATLAS Yoda system

    CERN Document Server

    Calafiura, Paolo; The ATLAS collaboration; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2015-01-01

    High performance computing facilities present unique challenges and opportunities for HENP event processing. The massive scale of many HPC systems means that fractionally small utilizations can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HENP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficie...
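
    The master-client, pull-style work assignment at the heart of this model fits in a short mpi4py sketch. This is a toy illustration of the pattern (invented tags, chunk size and event count), not the Yoda code itself:

        # Run with e.g.: mpirun -n 4 python yoda_sketch.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        TAG_READY, TAG_WORK = 1, 2

        if comm.Get_rank() == 0:                       # master (rank 0)
            next_ev, n_events = 0, 1000
            workers = comm.Get_size() - 1
            while workers:
                st = MPI.Status()
                comm.recv(source=MPI.ANY_SOURCE, tag=TAG_READY, status=st)
                if next_ev < n_events:                 # hand out a small chunk
                    comm.send((next_ev, min(next_ev + 10, n_events)),
                              dest=st.Get_source(), tag=TAG_WORK)
                    next_ev += 10
                else:                                  # no work left: stop
                    comm.send(None, dest=st.Get_source(), tag=TAG_WORK)
                    workers -= 1
        else:                                          # worker: pull on demand
            while True:
                comm.send(comm.Get_rank(), dest=0, tag=TAG_READY)
                chunk = comm.recv(source=0, tag=TAG_WORK)
                if chunk is None:
                    break
                # ... process events chunk[0]..chunk[1], stream outputs ...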

  4. ATLAS Tile Calorimeter calibration and monitoring systems

    Science.gov (United States)

    Cortés-González, Arely

    2018-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes located in the outer part of the calorimeter. Neutral particles may also produce a signal after interacting with the material and producing charged particles. The readout is segmented into about 5000 cells, each of them read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. This comprises caesium radioactive sources, a laser, charge-injection elements and an integrator-based readout system. Information from all systems allows the calorimeter response to be monitored and equalised at each stage of the signal production, from scintillation light to digitisation. Calibration runs are monitored from a data-quality perspective and used as a cross-check for physics runs. The data-quality efficiency achieved during 2016 was 98.9%. The calibration and stability results reported here show that the TileCal performance is within the design requirements and has made an essential contribution to reconstructed objects and physics results.

  5. The liquid helium system of ATLAS

    International Nuclear Information System (INIS)

    Nixon, J.M.; Bollinger, L.M.

    1989-01-01

    Starting in 1978 with one small refrigerator and distribution line, the LHe system of ATLAS has gradually grown into a complex network, as required by several enlargements of the superconducting linac. The cryogenic system now comprises 3 refrigerators, 11 helium compressors, ~340 ft of coaxial LHe transfer line, three 1000-l dewars, and ~76 LHe valves that deliver steady-state flowing LHe to 16 beam-line cryostats. In normal operation, the 3 refrigerators are linked so as to provide cooling where needed. LHe heat exchangers in distribution lines play an important role. This paper discusses design features of the system, including the logic of the controls that permit the coupled refrigerators to operate stably in the presence of large and sudden changes in heat load. 8 refs., 3 figs.

  6. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Detector, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing LHC with Cosmic rays, Heavy ion collisions, Trigger and Data Acquisition TDAQ, Computing, the LHC and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration, as well as a short list of videos on ATLAS available for viewing.

  7. MBAT: A scalable informatics system for unifying digital atlasing workflows

    Directory of Open Access Journals (Sweden)

    Sane Nikhil

    2010-12-01

    Background: Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continues to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. Results: The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as support for multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. Conclusions: MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context.

  8. Rucio - The next generation large scale distributed system for ATLAS Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Lassnig, M; Barisits, M; Vigne, R; Serfon, C; Stewart, G A; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the ATLAS experiment scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 150 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on new technologies to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads.

  9. ATLAS Calorimeter system: Run-2 performance, Phase-1 and Phase-2 upgrades

    CERN Document Server

    Starz, Steffen; The ATLAS collaboration

    2018-01-01

    The ATLAS detector was designed and built to study proton-proton collisions produced at the LHC at centre-of-mass energies up to 14 TeV and instantaneous luminosities up to 10^{34} cm^{−2} s^{−1}. A liquid argon-lead sampling calorimeter (LAr) is employed as the electromagnetic calorimeter and the hadronic calorimeter, except in the barrel region, where a scintillator-steel sampling calorimeter (TileCal) is used as the hadronic calorimeter. ATLAS recorded 87 fb^{-1} of data at a centre-of-mass energy of 13 TeV between 2015 and 2017. In order to achieve the level-1 acceptance rate of 100 kHz, certain adjustments have been performed. The calorimetry system performed according to its design values and has played a crucial role in the ATLAS physics programme. This contribution will give an overview of the detector operation, monitoring and data quality, as well as the achieved performance, including the calibration and stability of the energy scale, noise level, response uniformity and time resolution of the ATLAS cal...

  10. The ATLAS/TILECAL Detector Control System

    CERN Document Server

    Santos, H; The ATLAS collaboration

    2010-01-01

    Tilecal, the barrel hadronic calorimeter of ATLAS, is a sampling calorimeter where scintillating tiles are embedded in an iron matrix. The tiles are optically coupled to wavelength shifting fibers that carry the optical signal to photo-multipliers. It has a cylindrical shape and is made out of 3 cylinders: the Long Barrel with the LBA and LBC partitions, and the two Extended Barrels with the EBA and EBC partitions. The main task of the Tile calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector, are handled by DCS. The Tile calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the laser and cesium calibration systems, the data acquisition system, configuration and conditions databases and the detector safety system. In...

  11. ATLAS Magnet System Nearing Completion

    CERN Document Server

    ten Kate, H H J

    2008-01-01

    The ATLAS Detector at the Large Hadron Collider at CERN is equipped with a superconducting magnet system that consists of a Barrel Toroid, two End-Cap Toroids and a Central Solenoid. The four magnets generate the magnetic field for the muon- and inner tracking detectors, respectively. After 10 years of construction in industry, integration and on-surface tests at CERN, the magnets are now in the underground cavern where they undergo the ultimate test before data taking in the detector can start during the course of next year. The system with outer dimensions of 25 m length and 22 m diameter is based on using conduction cooled aluminum stabilized NbTi conductors operating at 4.6 K and 20.5 kA maximum coil current with peak magnetic fields in the windings of 4.1 T and a system stored magnetic energy of 1.6 GJ. The Barrel Toroid and Central Solenoid were already successfully charged after installation to full current in autumn 2006. This year the system is completed with two End Cap Toroids. The ultimate test of...

  12. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, from a high-level summary for the whole cloud down to detailed diagnostics for single site services. This approach provides efficient identification of correlated site problems and simplifies administration at both cloud and site level.
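
    The basic pattern behind such a tool, gathering per-site results and rendering one HTML summary, is simple to sketch. The Python below is purely illustrative (invented site names and checks, standard library only, no Apache integration):

        def render_cloud_page(sites):
            rows = "\n".join(
                "<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
                % (site, "OK" if s["transfers"] else "FAIL",
                   "OK" if s["sam"] else "FAIL")
                for site, s in sorted(sites.items()))
            return ("<html><body><h1>DE-cloud summary</h1><table>"
                    "<tr><th>Site</th><th>Transfers</th><th>SAM</th></tr>"
                    + rows + "</table></body></html>")

        status = {"GridKa": {"transfers": True, "sam": True},
                  "Goettingen": {"transfers": True, "sam": False}}
        with open("cloud.html", "w") as f:
            f.write(render_cloud_page(status))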

  13. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment, analogously to a WLCG center. This concept allows both data analysis and production to run on the HPC host system, which is connected to the existing Tier-2/Tier-3 infrastructure. The schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  14. Improving Security in the ATLAS PanDA System

    International Nuclear Information System (INIS)

    Caballero, J; Maeno, T; Potekhin, M; Wenaus, T; Nilsson, P; Stewart, G

    2011-01-01

    The security challenges faced by users of the grid are considerably different to those faced in previous environments. The adoption of pilot jobs systems by LHC experiments has mitigated many of the problems associated with the inhomogeneities found on the grid and has greatly improved job reliability; however, pilot jobs systems themselves must then address many security issues, including the execution of multiple users' code under a common 'grid' identity. In this paper we describe the improvements and evolution of the security model in the ATLAS PanDA (Production and Distributed Analysis) system. We describe the security in the PanDA server which is in place to ensure that only authorized members of the VO are allowed to submit work into the system and that jobs are properly audited and monitored. We discuss the security in place between the pilot code itself and the PanDA server, ensuring that only properly authenticated workload is delivered to the pilot for execution. When the code to be executed is from a 'normal' ATLAS user, as opposed to the production system or other privileged actor, then the pilot may use an EGEE developed identity switching tool called gLExec. This changes the grid proxy available to the job and also switches the UNIX user identity to protect the privileges of the pilot code proxy. We describe the problems in using this system and how they are overcome. Finally, we discuss security drills which have been run using PanDA and show how these improved our operational security procedures.

  15. The LUCID detector ATLAS luminosity monitor and its electronic system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00378808; The ATLAS collaboration

    2016-01-01

    Since 2015 the LHC has been performing a new run, at a higher center-of-mass energy (13 TeV) and with 25 ns bunch spacing. The ATLAS luminosity monitor LUCID has been completely renewed, both in detector design and in electronics, in order to cope with the new running conditions. The new detector electronics is presented, featuring a new read-out board (LUCROD) for signal acquisition and digitization, PMT-charge integration and single-side luminosity measurements, and the revisited LUMAT board for side-A/side-C combination. The contribution covers the design of the new boards, the firmware and software developments, the implementation of luminosity algorithms, the optical communication between boards and the integration into the ATLAS TDAQ system.
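
    One standard family of luminosity algorithms for such detectors is zero counting: from the fraction of bunch crossings with at least one hit, Poisson statistics give the mean number of visible interactions per crossing, mu_vis = -ln(1 - N_or/N_bc). A minimal sketch, with an invented visible cross-section and invented counts:

        import math

        LHC_F_REV = 11245.5                       # LHC revolution frequency, Hz

        def lumi_per_bunch(n_or, n_bc, sigma_vis_mb=30.0):
            """Event-OR zero counting: n_or crossings with >=1 hit out of n_bc."""
            mu_vis = -math.log(1.0 - float(n_or) / n_bc)
            sigma_vis_cm2 = sigma_vis_mb * 1e-27  # 1 mb = 1e-27 cm^2
            return mu_vis * LHC_F_REV / sigma_vis_cm2   # cm^-2 s^-1 per bunch

        print("%.3e" % lumi_per_bunch(n_or=90000, n_bc=1000000))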

  16. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment is currently observing proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~10^33 cm^-2 s^-1. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the results into different raw data files according to the trigger decision. The data files are subsequently moved to the central mass storage facility at CERN. The system currently in production has been commissioned in 2007 and has been working smoothly since then. It is however based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size that is foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limi...

  17. Multi-Threaded Evolution of the Data-Logging System of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment observes proton-proton collisions delivered by the LHC accelerator at a centre of mass energy of 7 TeV with a peak luminosity of ~ 10^33 cm^-2 s^-1 in 2011. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted average rate of ~ 400 Hz for an event size of ~1.2 MB. This paper focuses on the TDAQ data-logging system. Its purpose is to receive events from the third level trigger, process them and stream the data into different raw files according to the trigger decision. The system currently in production is based on an essentially single-threaded design that is anticipated not to cope with the increase in event rate and event size foreseen as part of the ATLAS and LHC upgrade programs. This design also severely limits the possibility of performing additional CPU-intensive tasks. Therefore, a novel design able to exploit the full power of multi-core architecture is needed. The main challen...
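
    The design direction sketched in these two records, decoupling event receipt from file writing with queues and worker threads, can be illustrated in a few lines of Python (toy streams and payloads; the real system is far more involved):

        import queue
        import threading

        streams = ("physics", "express", "debug")
        queues = {s: queue.Queue(maxsize=10000) for s in streams}

        def writer(stream):
            # One writer thread per output stream avoids interleaved writes.
            with open("%s.raw" % stream, "wb") as f:
                while True:
                    payload = queues[stream].get()
                    if payload is None:        # sentinel: flush and stop
                        break
                    f.write(payload)

        threads = [threading.Thread(target=writer, args=(s,)) for s in streams]
        for t in threads:
            t.start()

        queues["physics"].put(b"\x00" * 1024)  # routed by trigger decision
        for q in queues.values():
            q.put(None)
        for t in threads:
            t.join()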

  18. NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box

    Science.gov (United States)

    ONeil, D. A.; Craig, D. A.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The objective of this Technical Interchange Meeting was to increase the quantity and quality of technical, cost, and programmatic data used to model the impact of investing in different technologies. The focus of this meeting was the Technology Tool Box (TTB), a database of performance, operations, and programmatic parameters provided by technologists and used by systems engineers. The TTB is the data repository used by a system of models known as the Advanced Technology Lifecycle Analysis System (ATLAS). This report describes the result of the November meeting, and also provides background information on ATLAS and the TTB.

  19. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  20. The ATLAS PanDA Monitoring System and its Evolution

    International Nuclear Information System (INIS)

    Klimentov, A; Nevski, P; Wenaus, T; Potekhin, M

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
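
    The JSON/AJAX split described in these two records (the server ships data, the browser renders it) is easy to sketch. The standard-library Python below is illustrative only, with invented job records and URL; the real monitor is a Django application:

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        JOBS = [{"id": 1, "site": "BNL", "state": "finished"},
                {"id": 2, "site": "CERN", "state": "running"}]

        class Monitor(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path == "/jobs.json":       # data, not presentation
                    body = json.dumps(JOBS).encode()
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(body)          # browser-side JS renders this
                else:
                    self.send_error(404)

        if __name__ == "__main__":
            HTTPServer(("", 8080), Monitor).serve_forever()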

  1. The Tilecal/ATLAS detector control system

    CERN Document Server

    Tomasio Pina, João Antonio

    2004-01-01

    Tilecal is the barrel hadronic calorimeter of the ATLAS detector that is presently being built at CERN to operate at the LHC accelerator. The main task of the Tilecal detector control system (DCS) is to enable the coherent and safe operation of the detector. All actions initiated by the operator and all errors, warnings and alarms concerning the hardware of the detector are handled by DCS. The DCS has to continuously monitor all operational parameters and give warnings and alarms concerning the hardware of the detector. The DCS architecture consists of a distributed back-end (BE) system running on PCs and different front-end (FE) systems. The implementation of the BE will be achieved with a commercial supervisory control and data acquisition (SCADA) system, and the FE instrumentation will consist of a wide variety of equipment. The connection between the FE and BE is provided by fieldbus or L

  2. Design of a large remote seismic exploration data acquisition system, with the architecture of a distributed storage area network

    International Nuclear Information System (INIS)

    Cao, Ping; Song, Ke-zhu; Yang, Jun-feng; Ruan, Fu-ming

    2011-01-01

    Nowadays, seismic exploration data acquisition (DAQ) systems have developed into remote forms with a large-scale coverage area. In this kind of application, some features must be mentioned. Firstly, there are many sensors which are placed remotely. Secondly, the total data throughput is high. Thirdly, optical fibres are not suitable everywhere because of cost control, harsh running environments, etc. Fourthly, the ability to expand and upgrade is a must for this kind of application. It is a challenge to design this kind of remote DAQ (rDAQ): data transmission, clock synchronization, data storage, etc. must be considered carefully. A four-level hierarchical model of rDAQ is proposed, in which rDAQ is divided into four different function levels. From this model, a simple and clear architecture based on a distributed storage area network is proposed. rDAQs with this architecture have the advantages of flexible configuration, expansibility and stability. This architecture can be applied to design and realize anything from simple single-cable systems to large-scale exploration DAQs

  3. Asymmetric Data Acquisition System for an Endoscopic PET-US Detector

    Science.gov (United States)

    Zorraquino, Carlos; Bugalho, Ricardo; Rolo, Manuel; Silva, Jose C.; Vecklans, Viesturs; Silva, Rui; Ortigão, Catarina; Neves, Jorge A.; Tavernier, Stefaan; Guerra, Pedro; Santos, Andres; Varela, João

    2016-02-01

    According to current prognosis studies of pancreatic cancer, the survival rate nowadays is still as low as 6%, mainly due to late detection. Taking into account the location of the disease within the body, and making use of the level of miniaturization that can be achieved in radiation detectors at the present time, the EndoTOFPET-US collaboration aims at the development of a multimodal imaging technique for endoscopic pancreas exams that combines the benefits of high-resolution metabolic information from time-of-flight (TOF) positron emission tomography (PET) with anatomical information from ultrasound (US). A system with such capabilities calls for an application-specific high-performance data acquisition (DAQ) system able to control and read out data from different detectors. The system is composed of two novel detectors: a PET head extension for a commercial US endoscope placed internally close to the region-of-interest (ROI), and a PET plate placed over the patient's abdomen in coincidence with the PET head. These two detectors send asymmetric data streams that need to be handled by the DAQ system. The approach chosen to cope with these needs is a DAQ capable of performing multi-level triggering, distributed across two different on-detector electronics and the off-detector electronics placed inside the reconstruction workstation. This manuscript provides an overview of the design of this innovative DAQ system and, based on results obtained with final prototypes of the two detectors and DAQ, we conclude that a distributed multi-level triggering DAQ system is suitable for endoscopic PET detectors and shows potential for application in different scenarios with asymmetric sources of data.
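
    The essential task performed on the two asymmetric streams, pairing head and plate hits whose time stamps agree within a coincidence window, can be sketched as follows (invented hit format and window, not the EndoTOFPET-US trigger logic):

        def coincidences(head_hits, plate_hits, window):
            """Each hit is (time, channel, energy); inputs are time-sorted."""
            pairs, j = [], 0
            for t, ch, e in head_hits:
                while j < len(plate_hits) and plate_hits[j][0] < t - window:
                    j += 1                      # skip plate hits too early
                k = j
                while k < len(plate_hits) and plate_hits[k][0] <= t + window:
                    pairs.append(((t, ch, e), plate_hits[k]))
                    k += 1
            return pairs

        head = [(100.0, 3, 511.0), (250.0, 7, 480.0)]
        plate = [(100.002, 42, 505.0), (300.0, 11, 511.0)]
        print(coincidences(head, plate, window=0.005))  # -> one matched pair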

  4. The Evolution of the Trigger and Data Acquisition System in the ATLAS Experiment

    CERN Document Server

    Garelli, N; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current first LHC long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates.

  5. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  6. Experience from a pilot based system for ATLAS

    International Nuclear Information System (INIS)

    Nilsson, P

    2008-01-01

    The PanDA software provides a highly performing distributed production and distributed analysis system. It is the first system in the ATLAS experiment to use a pilot-based late job delivery technique. This paper describes the architecture of the pilot system used in PanDA. Unique features have been implemented for highly reliable automation in a distributed environment. The performance of PanDA is analyzed on the basis of one and a half years of experience in performing distributed computing on the Open Science Grid (OSG) infrastructure. Experience with the pilot delivery mechanism using Condor-G, and with a glide-in factory developed under OSG, is described

  7. Studies for the detector control system of the ATLAS pixel at the HL-LHC

    International Nuclear Information System (INIS)

    Püllen, L; Becker, K; Boek, J; Kersten, S; Kind, P; Mättig, P; Zeitnitz, C

    2012-01-01

    In the context of the LHC upgrade to the HL-LHC the inner detector of the ATLAS experiment will be replaced completely. As part of this redesign there will also be a new pixel detector. This new pixel detector requires a control system which meets the strict space requirements for electronics in the ATLAS experiment. To accomplish this goal we propose a DCS (Detector Control System) network with the smallest form factor currently available. This network consists of a DCS chip located in close proximity to the interaction point and a DCS controller located in the outer regions of the ATLAS detector. These two types of chips form a star-shaped network, with several DCS chips being controlled by one DCS controller. Both chips are manufactured in deep sub-micron technology. We present prototypes with emphasis on studies concerning single event upsets.

  8. The psychometric characteristics of the revised depression attitude questionnaire (R-DAQ) in Pakistani medical practitioners: a cross-sectional study of doctors in Lahore.

    Science.gov (United States)

    Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman

    2017-07-27

    Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising depression and providing appropriate support and management. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) among doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the revised depression attitude questionnaire (R-DAQ). A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using parallel analysis from polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3: Item analysis and test scoring with binary logistic models. Mooresville: Scientific Software, 55) and McDonald's omega statistic. Findings from this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. 601 (86%) of the doctors approached consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though the analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3-factor solution accounted for 49.8% of the common variance. Scale reliability and internal consistency were adequate: total scale standardised alpha was 0.694; subscale reliability for

  9. ATLAS-AWS

    International Nuclear Information System (INIS)

    Gehrcke, Jan-Philip; Stonjek, Stefan; Kluth, Stefan

    2010-01-01

    We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs, and retrieving job output from S3 is controlled from a client machine using Python scripts that implement the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
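
    The control flow described above maps directly onto the classic boto (2.x) API. The sketch below shows the launch/export/terminate cycle; the AMI id, region and bucket name are hypothetical placeholders, not the values used by the authors.

        # Launch an instance of the prepared AMI, export output to S3,
        # then terminate the instance (boto 2.x API).
        import boto.ec2
        import boto.s3
        from boto.s3.key import Key

        ec2 = boto.ec2.connect_to_region("us-east-1")
        reservation = ec2.run_instances("ami-12345678", instance_type="m1.large")
        instance = reservation.instances[0]

        # ... wait for the instance and run the ATLAS job transform ...

        s3 = boto.s3.connect_to_region("us-east-1")
        bucket = s3.get_bucket("atlas-job-output")  # placeholder bucket
        key = Key(bucket, "job-1234/output.root")
        key.set_contents_from_filename("output.root")

        ec2.terminate_instances(instance_ids=[instance.id])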

  10. The ATLAS Data Flow system in Run2: Design and Performance

    CERN Document Server

    Rifki, Othmane; The ATLAS collaboration

    2016-01-01

    The ATLAS detector uses a real-time selective triggering system to reduce the high interaction rate from 40 MHz to its data storage capacity of 1 kHz. A hardware first-level trigger limits the rate to 100 kHz and a software high-level trigger selects events for offline analysis. By building on the experience gained during the successful first run of the LHC, the ATLAS Trigger and Data Acquisition system has been simplified and upgraded to take advantage of state-of-the-art technologies. The Dataflow element of the system is composed of distributed hardware and software responsible for buffering and transporting event data from the Readout system to the High Level Trigger and to the event storage. This system has been reshaped in order to maximize the flexibility and efficiency of the data selection process. The updated dataflow is different from the previous implementation both in terms of architecture and performance. The biggest difference is within the high level trigger, where the merger of region-of-inte...

  11. Module and electronics developments for the ATLAS ITK pixel system

    CERN Document Server

    Nellist, Clara; The ATLAS collaboration

    2016-01-01

    ATLAS is preparing for an extensive modification of its detector in the course of the planned HL-LHC accelerator upgrade around 2025, which includes a replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). A revised trigger and data-taking system is foreseen, with triggers expected at the lowest level at an average rate of 1 MHz. The five innermost layers of ITk will comprise a pixel detector built of new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m2, depending on the final layout choice that is expected to take place in early 2017. A new on-detector readout chip is designed in the context of the RD53 collaboration in 65 nm CMOS technology. This paper will present the on-going R&D within the ATLAS ITk project towards the new pixel modules and the off-detector electronics. Pla...

  12. The next generation of the ATLAS PanDA Monitoring System

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Love, P; Potekhin, M; Wenaus, T

    2014-01-01

    For many years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, with up to 1M completed jobs/day in 2013. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. Outside of ATLAS, the PanDA system is also being used in projects like AMS, LSST and a few others. It is currently undergoing a significant redesign, both of the core server components responsible for workload management, brokerage and data access, and of the monitoring part, which is critically important for efficient execution of the workflow in a way that is transparent to the user and also provides an effective set of tools for operational support. The new generation of the PanDA Monitoring Service is designed based on a proven, scalable, industry-standard Web Fr...

  13. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Butti, P; The ATLAS collaboration

    2014-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, planar silicon modules or microstrips (PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2T solenoidal magnetic field. Misalignments and deformations of the active detector elements deteriorate the track reconstruction resolution and lead to systematic biases on the measured track parameters. The alignment procedure exploits various advanced tools and techniques in order to determine module positions and correct for deformations. For the LHC Run II, the system is being upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL).

  14. Overview of ATLAS PanDA Workload Management

    Science.gov (United States)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.
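
    As an illustration of the brokerage idea, a broker can score each site on data locality and backlog and dispatch the job to the best one. The scoring rule and site fields below are a hypothetical toy, not PanDA's actual algorithm.

        # Toy brokerage: prefer sites that hold the input data and have the
        # smallest queued-to-running backlog.
        def choose_site(job, sites):
            def score(site):
                if job["dataset"] not in site["datasets"]:
                    return float("inf")  # would require data replication
                return site["queued"] / max(site["running"], 1)
            return min(sites, key=score)

        sites = [
            {"name": "SiteA", "datasets": {"d1"}, "queued": 500, "running": 1000},
            {"name": "SiteB", "datasets": {"d1"}, "queued": 50, "running": 200},
        ]
        print(choose_site({"dataset": "d1"}, sites)["name"])  # -> SiteB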

  15. Overview of ATLAS PanDA Workload Management

    International Nuclear Information System (INIS)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G.A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.

    2011-01-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  16. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  17. Analog Readout and Digitizing System for ATLAS TileCal Demonstrator

    CERN Document Server

    Tang, F; The ATLAS collaboration

    2014-01-01

    The TileCal Demonstrator is a prototype for a future upgrade to the ATLAS hadron calorimeter for when the Large Hadron Collider increases luminosity in 2023 (HL-LHC). It will be used for functionality and performance tests. The Demonstrator has 48 channels of upgraded readout and digitizing electronics and a new digital trigger capability, but is backwards-compatible with the present detector system insofar as it also provides analog trigger signals. The Demonstrator comprises 4 identical mechanical mini-drawers, each equipped with up to 12 photomultipliers (PMTs). The on-detector electronics includes 45 Front-End Boards, each serving an individual PMT; 4 Main Boards, each to control and digitize up to 12 PMT signals; and 4 corresponding high-speed Daughter Boards serving as data hubs between on-detector and off-detector electronics. The Demonstrator is fully compatible with the present system, accepting ATLAS triggers, timing and slow control commands for the data acquisition, detector control, and de...

  18. THE 2002 DIG TRAINING

    CERN Multimedia

    Mapelli, L.

    The Detector Interface Group organized a training program this year, divided into two sessions, for people wishing to learn how to use and customize the modern DAQ prototype used for test beam and laboratory data acquisition by several groups in ATLAS. This data acquisition prototype is an evolution of the DAQ/EF-1 prototype, some parts of which have been evolving for exploitation first at the test beam (Tilecal starting in 2000, Muon MDT in 2001 and Pixel in 2002) and later for laboratory tests (LAr starting in 2000, Muon MDT and TGC in 2001). The training sessions were organized around the idea of building a detector data acquisition to read data from a detector crate and send the data over the Read Out Link to the remaining part of the DAQ. The first session took place last April 18th-19th. It was organized with some presentations and many hands-on exercises to learn how to build a DAQ configuration database and a controller to configure, control and steer the DAQ at the level of a hypothetical detector cra...

  19. Calorimetry triggering in ATLAS

    CERN Document Server

    Igonkina, O; Adragna, P; Aharrouche, M; Alexandre, G; Andrei, V; Anduaga, X; Aracena, I; Backlund, S; Baines, J; Barnett, B M; Bauss, B; Bee, C; Behera, P; Bell, P; Bendel, M; Benslama, K; Berry, T; Bogaerts, A; Bohm, C; Bold, T; Booth, J R A; Bosman, M; Boyd, J; Bracinik, J; Brawn, I P; Brelier, B; Brooks, W; Brunet, S; Bucci, F; Casadei, D; Casado, P; Cerri, A; Charlton, D G; Childers, J T; Collins, N J; Conde Muino, P; Coura Torres, R; Cranmer, K; Curtis, C J; Czyczula, Z; Dam, M; Damazio, D; Davis, A O; De Santo, A; Degenhardt, J; Delsart, P A; Demers, S; Demirkoz, B; Di Mattia, A; Diaz, M; Djilkibaev, R; Dobson, E; Dova, M T; Dufour, M A; Eckweiler, S; Ehrenfeld, W; Eifert, T; Eisenhandler, E; Ellis, N; Emeliyanov, D; Enoque Ferreira de Lima, D; Faulkner, P J W; Ferland, J; Flacher, H; Fleckner, J E; Flowerdew, M; Fonseca-Martin, T; Fratina, S; Föhlisch, F; Gadomski, S; Gallacher, M P; Garitaonandia Elejabarrieta, H; Gee, C N P; George, S; Gillman, A R; Goncalo, R; Grabowska-Bold, I; Groll, M; Gringer, C; Hadley, D R; Haller, J; Hamilton, A; Hanke, P; Hauser, R; Hellman, S; Hidvégi, A; Hillier, S J; Hryn'ova, T; Idarraga, J; Johansen, M; Johns, K; Kalinowski, A; Khoriauli, G; Kirk, J; Klous, S; Kluge, E-E; Koeneke, K; Konoplich, R; Konstantinidis, N; Kwee, R; Landon, M; LeCompte, T; Ledroit, F; Lei, X; Lendermann, V; Lilley, J N; Losada, M; Maettig, S; Mahboubi, K; Mahout, G; Maltrana, D; Marino, C; Masik, J; Meier, K; Middleton, R P; Mincer, A; Moa, T; Monticelli, F; Moreno, D; Morris, J D; Müller, F; Navarro, G A; Negri, A; Nemethy, P; Neusiedl, A; Oltmann, B; Olvito, D; Osuna, C; Padilla, C; Panes, B; Parodi, F; Perera, V J O; Perez, E; Perez Reale, V; Petersen, B; Pinzon, G; Potter, C; Prieur, D P F; Prokishin, F; Qian, W; Quinonez, F; Rajagopalan, S; Reinsch, A; Rieke, S; Riu, I; Robertson, S; Rodriguez, D; Rogriquez, Y; Röhr, F; Saavedra, A; Sankey, D P C; Santamarina, C; Santamarina Rios, C; Scannicchio, D; Schiavi, C; Schmitt, K; Schultz-Coulon, H C; Schäfer, U; Segura, E; Silverstein, D; Silverstein, S; Sivoklokov, S; Sjölin, J; Staley, R J; Stamen, R; Stelzer, J; Stockton, M C; Straessner, A; Strom, D; Sushkov, S; Sutton, M; Tamsett, M; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Torrence, E; Tripiana, M; Urquijo, P; Urrejola, P; Vachon, B; Vercesi, V; Vorwerk, V; Wang, M; Watkins, P M; Watson, A; Weber, P; Weidberg, T; Werner, P; Wessels, M; Wheeler-Ellis, S; Whiteson, D; Wiedenmann, W; Wielers, M; Wildt, M; Winklmeier, F; Wu, X; Xella, S; Zhao, L; Zobernig, H; de Seixas, J M; dos Anjos, A; Åsman, B; Özcan, E

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  20. ATLAS DataFlow Infrastructure recent results from ATLAS cosmic and first-beam data-taking

    CERN Document Server

    Vandelli, W

    2010-01-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to efficiently transport event data with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities as well as monitoring and data-accounting functionalities are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented testbed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its fle...

  1. The ATLAS Fast Tracker system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00353645; The ATLAS collaboration

    2017-01-01

    From 2010 to 2012 the Large Hadron Collider (LHC) operated at centre-of-mass energies of 7 TeV and 8 TeV, colliding bunches of particles every 50 ns. During operation, the ATLAS trigger system performed efficiently, contributing to important results, including the discovery of the Higgs boson in 2012. The LHC restarted in 2015 and will operate for four years at a centre-of-mass energy of 13 TeV with bunch spacings of 50 ns and 25 ns. These running conditions result in the mean number of overlapping proton-proton interactions per bunch crossing increasing from 20 to 60. The Fast Tracker (FTK) system is designed to deliver full event track reconstruction for all tracks with transverse momentum above 1 GeV at a Level-1 rate of 100 kHz with an average latency below 100 microseconds. This will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. To achieve this goal the system uses a parallel ...

  2. ATLAS & Google - The Data Ocean Project

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration

    2018-01-01

    With the LHC High Luminosity upgrade the workload and data management systems are facing new major challenges. To address those challenges ATLAS and Google agreed to cooperate on a project to connect Google Cloud Storage and Compute Engine to the ATLAS computing environment. The idea is to allow ATLAS to explore the use of different computing models, to allow ATLAS user analysis to benefit from the Google infrastructure, and to give Google real science use cases to improve their cloud platform. Making the output of a distributed analysis from the grid quickly available to the analyst is a difficult problem. Redirecting the analysis output to Google Cloud Storage can provide an alternative, faster solution for the analyst. First, Google's Cloud Storage will be connected to the ATLAS Data Management System Rucio. The second part aims to let jobs run on Google Compute Engine, accessing data from either ATLAS storage or Google Cloud Storage. The third part involves Google implementing a global redirection between...

  3. ATLAS experiment : mapping the secrets of the universe

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    This 4 page color brochure describes ATLAS and the LHC, the ATLAS inner detector, calorimeters, muon spectrometer, magnet system, a short definition of the terms "particles," "dark matter," "mass," "antimatter." It also explains the ATLAS collaboration and provides the ATLAS website address with some images of the detector and the ATLAS collaboration at work.

  4. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Having been in production during LHC Run1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  5. Online radiation dose measurement system for ATLAS experiment

    International Nuclear Information System (INIS)

    Mandic, I.; Cindro, V.; Dolenc, I.; Gorisek, A.; Kramberger, G.; Mikuz, M.; Bronner, J.; Hartet, J.; Franz, S.

    2009-01-01

    In experiments at the Large Hadron Collider, detectors and electronics will be exposed to high fluxes of photons, charged particles and neutrons. Damage caused by the radiation will influence the performance of the detectors. It will therefore be important to continuously monitor the radiation dose in order to follow the level of degradation of detectors and electronics and to correctly predict future radiation damage. A system for online radiation monitoring using semiconductor radiation sensors at a large number of locations has been installed in the ATLAS experiment. The ionizing dose in SiO2 will be measured with RadFETs, and the displacement damage in silicon, in units of 1-MeV(Si) equivalent neutron fluence, with p-i-n diodes. At the 14 monitoring locations where the highest radiation levels are expected, the fluence of thermal neutrons will be measured from the current-gain degradation of dedicated bipolar transistors. The design of the system and tests of its performance in a mixed radiation field are described in this paper. First results from this test campaign confirm that doses can be measured with sufficient sensitivity (mGy for total ionizing dose measurements, 10^9 n/cm^2 for NIEL (non-ionizing energy loss) measurements, 10^12 n/cm^2 for thermal neutrons) and accuracy (about 20%) for usage in the ATLAS detector.

  6. Dutch ministerial visit

    CERN Multimedia

    2007-01-01

    Dutch Minister of Education, Culture and Science R. Plasterk (third from left) in the ATLAS cavern with NIKHEF Director F. Linde, CERN Chief Scientific Officer J. Engelen, Ambassador J. van Eenennaam, ATLAS Collaboration Spokesperson P. Jenni, Mission Representative G. Vrielink and ATLAS Magnet Project Leader H. ten Kate.Minister of Education, Culture and Science from the Kingdom of the Netherlands, Ronald Plasterk, visited CERN on 25th October. With Jos Engelen, CERN Scientific Director, as his guide he visited Point 1 of the LHC tunnel and ATLAS, where Nikhef (the national institute for subatomic physics, a Dutch government and university collaboration) constructed all 96 of the largest muon drift chambers in the barrel as well as parts of the magnet system, the inner detector, the DAQ and triggering. Overall the Netherlands contribute 4.5% to the annual CERN budget and the minister’s visit celebrated the contributions of the 79 ...

  7. The TDAQ Baseline Architecture

    CERN Multimedia

    Wickens, F J

    The Trigger-DAQ community is currently busy preparing material for the DAQ, HLT and DCS TDR. Over the last few weeks a very important step has been a series of meetings to complete agreement on the baseline architecture. An overview of the architecture indicating some of the main parameters is shown in figure 1. As reported at the ATLAS Plenary during the February ATLAS week, the main area where the baseline had not yet been agreed was around the Read-Out System (ROS) and details in the DataFlow. The agreed architecture has: Read-Out Links (ROLs) from the RODs using S-Link; Read-Out Buffers (ROB) sited near the RODs, mounted in a chassis - today assumed to be a PC, using PCI bus at least for configuration, control and monitoring. The baseline assumes data aggregation, in the ROB and/or at the output (which could either be over a bus or in the network). Optimization of the data aggregation will be made in the coming months, but the current model has each ROB card receiving input from 4 ROLs, and 3 such c...

  8. An important step for the ATLAS toroid magnet

    CERN Multimedia

    2000-01-01

    The ATLAS experiment's prototype toroid coil arrives at CERN from the CEA laboratory in Saclay on 6 October. The world's largest superconducting toroid magnet is under construction for the ATLAS experiment. A nine-metre long fully functional prototype coil was delivered to CERN at the beginning of October and has since been undergoing tests in the West Area. Built mainly by companies in France and Italy under the supervision of engineers from the CEA-Saclay laboratory near Paris and Italy's INFN-LASA, the magnet is a crucial step forward in the construction of the ATLAS superconducting magnet system. Unlike any particle detector that has gone before, the ATLAS detector's magnet system consists of a large toroidal system enclosing a small central solenoid. The barrel part of the toroidal system will use eight toroid coils, each a massive 25 metres in length. These will dwarf the largest toroids in the world at the time ATLAS was designed, which measured about six metres. So the ATLAS collaboration decided to build a...

  9. Modelling of data acquisition systems

    International Nuclear Information System (INIS)

    Buono, S.; Gaponenko, I.; Jones, R.; Mapelli, L.; Mornacchi, G.; Prigent, D.; Sanchez-Corral, E.; Spiwoks, R.; Skiadelli, M.; Ambrosini, G.

    1994-01-01

    The RD13 project was approved in April 1991 for the development of a scalable data-taking system suitable to host various LHC studies. One of its goals is to use simulation as a tool for understanding, evaluating and constructing different configurations of such data acquisition (DAQ) systems. The RD13 project has developed a modelling framework for this purpose. It is based on MODSIM II, an object-oriented, discrete-event simulation language. A library of DAQ components makes it possible to describe a variety of DAQ architectures and different hardware options in a modular and scalable way. A graphical user interface (GUI) allows easy configuration, initialization and on-line monitoring of the simulation program. A tracing facility allows flexible off-line analysis of a trace file written at run-time.
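
    The component-library approach is easy to reproduce with any discrete-event engine. Below, the SimPy package stands in for MODSIM II; the trigger rate, buffer depth and service time are illustrative numbers, not RD13 parameters.

        # Two-stage DAQ chain: a trigger source fills a finite readout buffer,
        # an event builder drains it; the trace records per-event latency.
        import random
        import simpy

        def source(env, buffer, rate_hz):
            n = 0
            while True:
                yield env.timeout(random.expovariate(rate_hz))
                n += 1
                yield buffer.put({"id": n, "t": env.now})

        def builder(env, buffer, service_s, stats):
            while True:
                event = yield buffer.get()
                yield env.timeout(service_s)
                stats.append(env.now - event["t"])  # time spent in the chain

        env = simpy.Environment()
        buf = simpy.Store(env, capacity=64)  # finite readout buffer
        latencies = []
        env.process(source(env, buf, rate_hz=1000.0))
        env.process(builder(env, buf, service_s=0.0008, stats=latencies))
        env.run(until=10.0)
        print(len(latencies), "events, mean latency",
              sum(latencies) / len(latencies))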

  10. Experience with highly-parallel software for the storage system of the ATLAS Experiment at CERN

    CERN Document Server

    Colombo, T; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment is observing proton-proton collisions delivered by the LHC accelerator. The ATLAS Trigger and Data Acquisition (TDAQ) system selects interesting events on-line in a three-level trigger system in order to store them at a budgeted rate of several hundred Hz. This paper focuses on the TDAQ data-logging system and in particular on the implementation and performance of a novel parallel software design. In this respect, the main challenge presented by the data-logging workload is the conflict between the largely parallel nature of the event processing, especially the recently introduced event compression, and the constraint of sequential file writing and checksum evaluation. This is further complicated by the necessity of operating in a fully data-driven mode, to cope with continuously evolving trigger and detector configurations. In this paper we report on the design of the new ATLAS on-line storage software. In particular we will discuss our development experience using recent concurrency-ori...
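
    The parallel-compression versus sequential-writing conflict mentioned above admits a simple pattern: compress events concurrently, but let a single consumer write them in order and fold each block into a running checksum. The sketch below illustrates the pattern only; it is not the ATLAS data-logging code.

        # Workers compress in parallel; pool.map() yields results in
        # submission order, so writing and checksumming stay sequential.
        import zlib
        from concurrent.futures import ThreadPoolExecutor

        def write_events(events, path, n_workers=4):
            checksum = 0
            with ThreadPoolExecutor(max_workers=n_workers) as pool, \
                    open(path, "wb") as f:
                for blob in pool.map(zlib.compress, events):
                    checksum = zlib.adler32(blob, checksum)
                    f.write(blob)
            return checksum

        # events must be an iterable of bytes objects, e.g. serialized events
        print(write_events([b"event-1", b"event-2"], "run.data"))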

  11. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division; Hlava, K. [Environmental Science Division; Greenwood, H. [Environmentall Science Division; Carr, A. [Environmental Science Division

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  12. A Slice of ATLAS

    CERN Document Server

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  13. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  14. Time-stamping system for nuclear physics experiments at RIKEN RIBF

    International Nuclear Information System (INIS)

    Baba, H.; Ichihara, T.; Ohnishi, T.; Takeuchi, S.; Yoshida, K.; Watanabe, Y.; Ota, S.; Shimoura, S.; Yoshinaga, K.

    2015-01-01

    A time-stamping system for nuclear physics experiments has been introduced at the RIKEN Radioactive Isotope Beam Factory. Individual trigger signals can be applied to separate data acquisition (DAQ) systems. After the measurements are complete, the separately taken data are merged based on the time-stamp information. In a typical experiment, coincidence trigger signals are formed from multiple detectors so that only the desired events are recorded. The time-stamping system allows the use of minimum-bias triggers. Since coincidence conditions are applied in software, a variety of physics events can be identified flexibly. The live time of a DAQ system is important when attempting to determine reaction cross-sections. However, the combined live time of separate DAQ systems is not precisely known, because it depends not only on the DAQ dead time but also on the coincidence conditions. Using the proposed time-stamping system, all trigger timings can be acquired, so that the combined live time can be easily determined. The combined live time is also estimated using Monte Carlo simulations, and the results are compared with the directly measured values in order to assess the accuracy of the simulation.
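
    Offline merging by time stamp, with coincidences applied in software, can be sketched as a sorted merge followed by windowed grouping. The stream contents and the coincidence window below are illustrative, not RIBF parameters.

        # Merge time-ordered (timestamp, detector, data) streams and group
        # hits that fall within a software coincidence window.
        import heapq

        def merge_streams(*streams):
            return heapq.merge(*streams, key=lambda hit: hit[0])

        def coincidences(merged, window):
            group = []
            for hit in merged:
                if group and hit[0] - group[0][0] > window:
                    yield group  # close the current coincidence group
                    group = []
                group.append(hit)
            if group:
                yield group

        beam = [(10, "beam", None), (105, "beam", None)]
        gamma = [(12, "gamma", None), (300, "gamma", None)]
        for event in coincidences(merge_streams(beam, gamma), window=5):
            print(event)  # first event pairs the beam and gamma hits at t=10,12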

  15. CERN Summer Student Project Report

    CERN Document Server

    Parton, Thomas

    2015-01-01

    My Summer Student project was divided between two areas: work on Thin Gap Chamber (TGC) Level-1 muon triggers for the ATLAS experiment, and data acquisition (DAQ) for an RPC muon detector at the Gamma Irradiation Facility (GIF++)

  16. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  17. Calorimetry triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O; Achenbach, R; Andrei, V; Adragna, P; Aharrouche, M; Bauss, B; Bendel, M; Alexandre, G; Anduaga, X; Aracena, I; Backlund, S; Bogaerts, A; Baines, J; Barnett, B M; Bee, C; P, Behera; Bell, P; Benslama, K; Berry, T; Bohm, C

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  18. Calorimetry Triggering in ATLAS

    International Nuclear Information System (INIS)

    Igonkina, O.; Achenbach, R.; Adragna, P.; Aharrouche, M.; Alexandre, G.; Andrei, V.; Anduaga, X.; Aracena, I.; Backlund, S.; Baines, J.; Barnett, B.M.; Bauss, B.; Bee, C.; Behera, P.; Bell, P.; Bendel, M.; Benslama, K.; Berry, T.; Bogaerts, A.; Bohm, C.; Bold, T.; Booth, J.R.A.; Bosman, M.; Boyd, J.; Bracinik, J.; Brawn, I.P.; Brelier, B.; Brooks, W.; Brunet, S.; Bucci, F.; Casadei, D.; Casado, P.; Cerri, A.; Charlton, D.G.; Childers, J.T.; Collins, N.J.; Conde Muino, P.; Coura Torres, R.; Cranmer, K.; Curtis, C.J.; Czyczula, Z.; Dam, M.; Damazio, D.; Davis, A.O.; De Santo, A.; Degenhardt, J.

    2011-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  19. Calorimetry triggering in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Igonkina, O [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands); Achenbach, R; Andrei, V [Kirchhoff Institut fuer Physik, Universitaet Heidelberg, Heidelberg (Germany); Adragna, P [Physics Department, Queen Mary, University of London, London (United Kingdom); Aharrouche, M; Bauss, B; Bendel, M [Institut für Physik, Universität Mainz, Mainz (Germany); Alexandre, G [Section de Physique, Universite de Geneve, Geneva (Switzerland); Anduaga, X [Universidad Nacional de La Plata, La Plata (Argentina); Aracena, I [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Backlund, S; Bogaerts, A [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Baines, J; Barnett, B M [STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxon (United Kingdom); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Behera, P [Iowa State University, Ames, Iowa (United States); Bell, P [School of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Benslama, K [University of Regina, Regina (Canada); Berry, T [Department of Physics, Royal Holloway and Bedford New College, Egham (United Kingdom); Bohm, C [Fysikum, Stockholm University, Stockholm (Sweden)

    2009-04-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.

  20. Evolution of the Trigger and Data Acquisition System in the ATLAS experiment

    CERN Document Server

    Kama, Sami; The ATLAS collaboration

    2012-01-01

    The ATLAS detector is designed to observe proton-proton collisions delivered by the LHC accelerator. The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the selection and conveyance of physics data, reducing the rate of stored events from the initial 40 MHz LHC frequency to several hundred Hz. The TDAQ system is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as software systems distributed on commodity hardware nodes. The second-level trigger operates over limited regions of the detector, the so-called Regions-of-Interest (RoI). The last selection step deals instead with complete events. In the current design, the second and third trigger levels are separate systems. While this architecture has been successfully operated well beyond the original design goals, the accumulated experience has stimulated interest in exploring possible evolutions. One attractive direction is to merge the second and third tri...