WorldWideScience

Sample records for atlas daq system

  1. The Message Reporting System of the ATLAS DAQ System

    CERN Document Server

    Caprini, M; Kolos, S; 10th ICATPP Conference on Astroparticle, Particle, Space Physics, Detectors and Medical Physics Applications

    2008-01-01

    The Message Reporting System (MRS) in the ATLAS data acquisition system (DAQ) is the Online Software package that acts as glue between the various elements of the DAQ, the High Level Trigger (HLT) and the Detector Control System (DCS). The aim of the MRS is to provide a facility which allows all software components in ATLAS to report messages to other components of the distributed DAQ system. The processes requiring MRS are, on the one hand, applications that report error conditions or information and, on the other hand, message processors that receive reported messages. A message reporting application can inject one or more messages into the MRS at any time. An application wishing to receive messages can subscribe to a message group according to defined criteria. The application then receives the messages that fulfill the subscription criteria as they are reported to MRS. The receiver's message processing can range from simply logging the messages in a file or terminal to performing message analysis. The inter-process comm...
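
    The subscription mechanism described above can be pictured with a short sketch. The real MRS is a C++/CORBA package, so the Python below is only an illustration of the publish/subscribe idea; all names (MessageBus, subscribe, report) are hypothetical, not the actual ATLAS Online Software API.

        # Illustrative publish/subscribe sketch of the MRS idea (hypothetical names,
        # not the actual ATLAS Online Software API).
        from dataclasses import dataclass
        from typing import Callable, List


        @dataclass
        class Message:
            group: str       # message group, e.g. "DAQ"
            severity: str    # e.g. "INFO", "ERROR"
            text: str


        class MessageBus:
            """Keeps a list of (criteria, callback) subscriptions."""
            def __init__(self):
                self._subs: List[tuple] = []

            def subscribe(self, criteria: Callable[[Message], bool],
                          callback: Callable[[Message], None]) -> None:
                self._subs.append((criteria, callback))

            def report(self, msg: Message) -> None:
                # Deliver the message to every subscriber whose criteria match.
                for criteria, callback in self._subs:
                    if criteria(msg):
                        callback(msg)


        if __name__ == "__main__":
            bus = MessageBus()
            # A receiver interested only in ERROR messages of the "DAQ" group.
            bus.subscribe(lambda m: m.group == "DAQ" and m.severity == "ERROR",
                          lambda m: print(f"[{m.severity}] {m.text}"))
            bus.report(Message("DAQ", "INFO", "run started"))    # filtered out
            bus.report(Message("DAQ", "ERROR", "ROD time-out"))  # delivered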

  2. ATLAS DAQ/HLT rack DCS

    International Nuclear Information System (INIS)

    Ermoline, Yuri; Burckhart, Helfried; Francis, David; Wickens, Frederick J.

    2007-01-01

    The ATLAS Detector Control System (DCS) group provides a set of standard tools used by the subsystems to implement their local control systems. The ATLAS Data Acquisition and High Level Trigger (DAQ/HLT) rack DCS provides monitoring of environmental parameters (air temperature, humidity, etc.). The DAQ/HLT racks are located in the underground counting room (20 racks) and in the surface building (100 racks). The rack DCS is based on standard ATLAS tools and is integrated into the overall operation of the experiment. The implementation is based on a commercial control package and on additional components developed by the CERN Joint Controls Project Framework. The prototype implementation and measurements are presented

  3. Application of the ATLAS DAQ and Monitoring System for MDT and RPC Commissioning

    CERN Document Server

    Pasqualucci, E

    2007-01-01

    The ATLAS DAQ and monitoring software are currently commonly used to test detectors during the commissioning phase. In this paper, their usage in MDT and RPC commissioning is described, both at the surface pre-commissioning and commissioning stations and in the ATLAS pit. Two main components are heavily used for detector tests. The ROD Crate DAQ software is based on the ATLAS Readout application. Based on the plug-in mechanism, it provides a complete environment to interface any kind of detector or trigger electronics to the ATLAS DAQ system. All the possible flavours of this application are used to test and run the MDT and RPC detectors at the pre-commissioning and commissioning sites. Ad-hoc plug-ins have been developed to implement data readout via VME, both with ROD prototypes and emulating final electronics to read out data with temporary solutions, and to provide trigger distribution and busy management in a multi-crate environment. Data driven event building functionality is also used to combine data f...

  4. Full system test of module to DAQ for ATLAS IBL

    Energy Technology Data Exchange (ETDEWEB)

    Behpour, Rouhina; Mattig, Peter; Wensing, Marius [Wuppertal University (Germany)]; Bindi, Marcello [Goettingen University (Germany)]

    2015-07-01

    The IBL (Insertable B-Layer), the innermost layer of the ATLAS detector at the LHC, was successfully integrated into the system in June 2014. The reliability and consistency of the IBL system are under investigation during the ongoing milestone runs at CERN. The Back of Crate (BOC) card and the Readout Driver (ROD), two of the main electronics cards, act as the interface between the IBL modules and the TDAQ chain. Detector data are received, processed and then formatted through the interplay of these two cards. The BOC takes advantage of an S-Link implementation inside its main FPGAs. The S-Link protocol, a standard high-performance data acquisition link between the readout electronics and the TDAQ system, was developed and is used at CERN. It is based on the idea that formatted detector data are transferred through optical fibres to the ROS (Read Out System) PCs, where they are buffered in the ROBIN (Read Out Buffer) cards. This talk presents results that confirm a stable and good performance of the system, from the modules to the readout electronics and on to the ROS PCs via S-Link.

  5. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    CERN Document Server

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, the configuration databases and the message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  6. Applications of CORBA in the ATLAS prototype DAQ

    CERN Document Server

    Jones, R; Mapelli, Livio P; Ryabov, Yu

    2000-01-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package called Inter-Language Unification (ILU) has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate the objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, incorporates this facility and hides the details of the naming schema. The development procedure and environment for remote database access using IPC is described. Various end-use...

  7. Communication between Trigger/DAQ and DCS in ATLAS

    International Nuclear Information System (INIS)

    Burckhart, H.; Jones, R.; Hart, R.; Khomoutnikov, V.; Ryabov, Y.

    2001-01-01

    Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless there is a need to communicate. The initial problem definition and analysis suggested three subsystems that the Trigger/DAQ DCS Communication (DDC) project should provide, supporting the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems such as data flow, the level-1 and high-level triggers. The DDC uses the various Online components as its interface point on the Trigger/DAQ side, with the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online software process. The DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test-beam during summer 2001.

  8. Development and test of the DAQ system for a Micromegas prototype installed into the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m$^2$, which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used to build valid event fragments and transmit them to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  9. The readiness of the ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the Large Hadron Collider (LHC) will provide proton-proton collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. Design choices and the strategies employed to minimize the data-collection and selection latency will be discussed. First results of tests done during the commissioning phase and the operational performance after the first months of data taking will be presented.

  10. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

    The DAQ system (see Figure 2) consists of: – the full detector read-out of a total of 633 FEDs (Front-End Drivers); the FRL (Front-End Readout Link) provides the common interface between the sub-detector specific FEDs and the central DAQ; – 8 DAQ slices with a 100 GB/s event building capacity, corresponding to a nominal 2 kB per FRL at a Level-1 (L1) trigger rate of 100 kHz; – an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; – a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). [Figure 2: The CMS DAQ system] The DAQ system for the 2010 physics runs: The DAQ system has been deployed for pp and heavy-ion physics data-taking. It can be easily ...

  11. The LHCb DAQ system

    CERN Document Server

    Jost, B

    2000-01-01

    The LHCb experiment is the most recently approved of the 4 experiments under construction at CERN's LHC accelerator. It is a special-purpose experiment designed to precisely measure the CP violation parameters in the B-Bbar system. Triggering poses special problems since the interesting events containing B-mesons are immersed in a large background of inelastic p-p reactions. We therefore decided to implement a 4-level triggering scheme. The LHCb Data Acquisition (DAQ) system will have to cope with an average trigger rate of ~40 kHz, after two levels of hardware triggers, and an average event size of ~150 kB. Thus an event-building network which can sustain an average bandwidth of 6 GB/s is required. A powerful software trigger farm will have to be installed to reduce the rate from the 40 kHz to ~200 Hz of events written to permanent storage. In this paper we will concentrate on the networking aspects of the LHCb data acquisition and the controls system. (11 refs).

  12. Applications of CORBA in the ATLAS prototype DAQ

    Science.gov (United States)

    Jones, R.; Kolos, S.; Mapelli, L.; Ryabov, Y.

    2000-04-01

    This paper presents the experience of using the Common Object Request Broker Architecture (CORBA) in the ATLAS prototype DAQ project. Many communication links in the DAQ system have been designed and implemented using the CORBA standard. A public domain package called Inter-Language Unification (ILU) has been used to implement CORBA-based communications between DAQ components in a local area network (LAN) of heterogeneous computers. The CORBA Naming Service provides the principal mechanism through which most clients of an ORB-based system locate the objects that they intend to use. In our project, conventions are employed that meaningfully partition the name space of the Naming Service according to divisions in the DAQ system itself. The Inter Process Communication (IPC) package, implemented in C++ on top of CORBA/ILU, incorporates this facility and hides the details of the naming schema. The development procedure and environment for remote database access using IPC is described. Various end-user interfaces have been implemented using the Java language that communicate with C++ servers via CORBA/ILU. To support such interfaces, a second implementation of IPC, in Java, has been developed. The design and implementation of such connections are described. An alternative CORBA implementation, ORBacus, has been evaluated and compared with ILU.
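
    The naming convention mentioned above, partitioning the Naming Service name space along the divisions of the DAQ system, can be sketched as follows. This is a plain illustration with hypothetical names; it is not the CosNaming API or the actual IPC package, which is C++ on top of CORBA/ILU.

        # Illustrative sketch of a name space partitioned by DAQ partition and
        # component type, hiding the naming schema behind publish/lookup calls.
        class NameRegistry:
            def __init__(self):
                self._objects = {}

            @staticmethod
            def _path(partition: str, kind: str, name: str) -> str:
                # The naming schema is kept in one place so clients never build paths.
                return f"/{partition}/{kind}/{name}"

            def publish(self, partition, kind, name, obj_ref):
                self._objects[self._path(partition, kind, name)] = obj_ref

            def lookup(self, partition, kind, name):
                return self._objects[self._path(partition, kind, name)]


        if __name__ == "__main__":
            registry = NameRegistry()
            registry.publish("TestBeam", "RunControl", "RootController", "IOR:...")
            print(registry.lookup("TestBeam", "RunControl", "RootController"))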

  13. The readiness of ATLAS Trigger-DAQ system for the second LHC run

    CERN Document Server

    Rammensee, Michael; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The updated system is radically different from the previous implementation, both in terms of architecture and expected performance. The main architecture has been reshaped in order to profit from technological progress and to maximize the flexibility and efficiency of the data selection process. The trigger system in ATLAS consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. The pre-existing two-level software filtering, known as L2 and the Event Filter, is now merged into a single process performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architec...

  14. DAQ

    CERN Multimedia

    F. Meijers.

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation: The DAQ system has been successfully deployed to capture the first LHC collisions, with trigger rates typically in the range 1-11 kHz. The DAQ system also serviced global cosmics and commissioning data taking, where data were typically taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. Operational procedures for DAQ shifters and on-call experts have been consolidated. Throughout 2009, the online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. A development and integration database has been ...

  15. DAQ

    CERN Multimedia

    J.A. Coarasa Perez

    Event Builder: One of the key design features of CMS is the large Central Data Acquisition System, capable of bringing over 100 GB of data to the surface and building 100,000 events every second. This very large DAQ system is expected to give CMS a competitive advantage since we can have a very flexible High Level Trigger running entirely on standard computer processors. The first stage of what will be the largest DAQ system in the world is now being commissioned at Point 5. While the detector has been read out until now by a small system called the mini-DAQ, the large central DAQ Event Builder has been put together and debugged over the last 4 months. During the month of September, the full system from the FED (front-end connection to the detector readout) to the Filter Unit is being commissioned and we hope to use the central DAQ Event Builder for the Global Run at the end of September. The first batch of 400 computers arrived in mid-April. These computers became Readout Units (RUs), wit...

  16. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    The DAQ system (see Figure 2) consists of: – the full detector read-out of a total of 633 FEDs (front-end drivers); the FRL (front-end readout link) provides the common interface between the sub-detector specific FEDs and the central DAQ; – 8 DAQ slices with a 100 GB/s event building capacity, corresponding to a nominal 2 kB per FRL at a Level-1 trigger rate of 100 kHz; – an event filter to run the HLT (High Level Trigger) comprising 720 PCs with two quad-core 2.6 GHz CPUs; – a 16-node storage manager system allowing a writing rate that exceeds 1 GB/s, with concurrent transfers to Tier 0 at the same rate, and a total storage capacity of 250 TB. It also forwards events to the online DQM (Data Quality Monitoring). [Figure 2: The CMS DAQ system.] The two-stage event builder assembles event fragments from typically eight front-ends located underground (USC) into one super-...

  17. DAQ

    CERN Multimedia

    F. Meijers and C. Schwick

    2010-01-01

    The DAQ system has been deployed for physics data taking as well as supporting global test and commissioning activities. In addition to 24/7 operations, activities addressing performance and functional improvements are ongoing. The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing up to a 2 GByte/s writing rate and a total capacity of 250 TBytes. Operation: The LHC delivered the highest luminosity in fills with 6-8 colliding bunches and reached peak luminosities of 1-2 x 10^29 cm^-2 s^-1. The DAQ was typically operating in those conditions with a ~15 kHz trigger rate, a raw event size of ~500 kByte, and a ~150 Hz recording of stream-A with a size of ~50 kB. The CPU load on the HLT was ~10%. Tests for Heavy-Ion operation: Tests have been carried out to examine the situation for data-taking in the future Heavy-Ion (HI) run. The high occupancy expected in HI run...

  18. DAQ

    CERN Multimedia

    F. Meijers

    2011-01-01

    Operation for the 2011 physics run: For the 2011 run, the HLT farm has been extended with additional PCs comprising 288 system boards with two 6-core CPUs each. This brought the total HLT capacity from 5760 cores to 9216 cores and 18 TB of memory. It provides a capacity for the HLT of about 100 ms/event (on a 2.7 GHz E5430 core) at a 100 kHz L1 rate in pp collisions. All central DAQ nodes have been migrated to an SLC5/64-bit kernel and 64-bit applications. The DAQ system has been deployed for pp physics data-taking in 2011 and performed with high efficiency (downtime for central DAQ was less than 1%). For pp physics data-taking, the DAQ was operating with an L1 trigger rate up to ~100 kHz and, typically, a raw event size of ~500 kB, and ~400 Hz recording of stream-A (which includes all physics triggers) with a size of ~250 kB after compression. The event size increases linearly with the pile-up, as expected. The CPU load on the HLT reached close to 100%, depending on L1 and HLT menus. By changing the L1 and HLT pre-...

  19. DAQ

    CERN Multimedia

    E. Meschi

    2013-01-01

    The File-based Filter Farm in the CMS DAQ MarkII: The CMS DAQ system will be upgraded after LS1 in order to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The experiment parameters for the post-LS1 data taking remain similar to those of Run 1: a Level-1 aggregate rate of 100 kHz and an aggregate HLT output bandwidth of up to 2 GB/s. A moderate event-size increase is anticipated from increased pile-up and changes in the detector readout. For the output bandwidth, the figure of 2 GB/s is assumed. The original Filter Farm design has been successfully operated in 2010-2013 and its efficiency and fault tolerance brought to an excellent level. There are, however, a number of disadvantages in that design at the interface between the DAQ data flow and the High-Level Trigger that warrant careful scrutiny in view of the deployment of DAQ2 after LS1: the reduction of the number of RU bui...

  20. DAQ

    CERN Multimedia

    J. Hegeman

    2013-01-01

    The DAQ2 system for post-LS1 is a re-implementation of the central DAQ event data flow with the capability to read out the majority of legacy back-end sub-detector electronics FEDs, as well as the new MicroTCA-based back-end electronics (see for example the previous (December 2012) issue of the CMS Bulletin). A further upgrade in the DAQ and Trigger is the development of the new TCDS, outlined in the forthcoming Level-1 Trigger Upgrade TDR. The new TCDS (Trigger Control and Distribution System): Currently, CMS trigger control comprises three more-or-less separate systems. The Trigger Timing and Control (TTC) system distributes the L1A signals and synchronisation commands to all front-ends. The Trigger Throttling System (TTS) collects front-end readiness information and propagates it up to the central Trigger Control System (TCS). The TCS allows or vetoes Level-1 triggers from the Global Trigger (GT) based on the TTS state and on the trigger rules. These three systems will be combined in the new control ...

  1. DAQ

    CERN Multimedia

    F. Meijers

    2010-01-01

    The DAQ system consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the HLT comprising 720 8-core PCs, and a 16-node storage manager system allowing a writing rate up to 2 GByte/s and a total capacity of 250 TBytes. Operation: Returning after the Christmas stop, the DAQ system serviced global cosmics and commissioning data taking. Typically data were taken with a ~1 kHz cosmic trigger rate and a raw event size of ~500 kByte. Often an additional ~100 kHz of random triggers were mixed in, pre-scaled for storage, to stress-test the overall system. The online cluster, the production online Oracle database, and the central Detector Control System (DCS) have been operational 24/7. Infrastructure: Immediately after the Christmas break, the online data center was put into maximum heat production mode to stress the cooling infrastructure. The maximum heat load produced in the room was about 570 kW. It appeared that the current settings ...

  2. DAQ

    CERN Document Server

    A. Racz

    The CMS DAQ installation status: The year 2005 was dedicated to the production and testing of the custom-made electronics boards and to the procurement of the commercial items needed to operate the underground part of the Data Acquisition System of CMS. The first half of 2006 was spent installing the DAQ infrastructure in USC55 (dedicated cable trays in the false floor) and preparing the racks to receive the hardware elements. The second half of 2006 was dedicated to the installation of the CMS DAQ elements in the underground control room. As a quick reminder, the underground part of the Data Acquisition System performs two tasks: a) Front-End data collection and transmission to the online computing farm on the surface (SCX); b) Front-End status collection and elaboration of a smart back-pressure signal preventing the overflow of the Front-End electronics. The hardware elements installed to perform these two tasks are the following: 500 FRL cards receiving the data of one or two sender...

  3. EPICS based DAQ system

    International Nuclear Information System (INIS)

    Cheng Weixing; Chen Yongzhong; Zhou Weimin; Ye Kairong; Liu Dekang

    2002-01-01

    EPICS is the most popular development platform for building control systems and beam diagnostic systems in modern physics experiment facilities. An EPICS-based data acquisition system was built on the Red Hat 6.2 operating system. The system has been successfully used in beam position monitor mapping, where it considerably improves the mapping process.

  4. The BELLE DAQ system

    Science.gov (United States)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-10-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. The system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. The system has been in operation for physics data taking since June 1999 without serious problems.

  5. The BELLE DAQ system

    International Nuclear Information System (INIS)

    Suzuki, Soh Yamagata; Yamauchi, Masanori; Nakao, Mikihiko; Itoh, Ryosuke; Fujii, Hirofumi

    2000-01-01

    We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. The system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. The system has been in operation for physics data taking since June 1999 without serious problems.

  6. Editor for Remote Database used in ATLAS Trigger/DAQ

    CERN Document Server

    Meessen, C; Valenta, J

    2006-01-01

    The poster gives a brief summary of the ATLAS T/DAQ system, then introduces the RDB database and describes the RDB Editor application, including its internal structure, GUI features, etc. The RDB Editor is an easy-to-use Java application which allows simple navigation among the huge number of objects stored in the RDB. It supports bookmarks, histories, etc., in the way usual in web browsers. Moreover, it is possible to enhance the application with specialized (graphical) viewers for objects of a particular class, which allow the user to see, for example, details that are hard to spot in a textual view. As an example of such a plug-in, a viewer for the EFD_Configuration class was developed.

  7. Development and test of the DAQ system for a Micromegas prototype to be installed in the ATLAS experiment

    CERN Document Server

    Zibell, Andre; The ATLAS collaboration; Bianco, Michele; Martoiu, Victor Sorin

    2015-01-01

    A Micromegas (MM) quadruplet prototype with an active area of 0.5 m^2, which adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC Run-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS-compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used to build valid event fragments and transmit them to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Soft...

  8. A DAQ system for CAMAC controller CC/NET using DAQ-Middleware

    International Nuclear Information System (INIS)

    Inoue, E; Yasu, Y; Nakayoshi, K; Sendai, H

    2010-01-01

    DAQ-Middleware is a framework for DAQ systems which is based on RT-Middleware (Robot Technology Middleware) and dedicated to building DAQ systems. DAQ-Middleware has in recent years come into use as one of the DAQ frameworks for next-generation particle physics experiments at KEK. DAQ-Middleware comprises DAQ-Components with all the necessary basic functions of a DAQ and is easily extensible. So, using DAQ-Middleware, you are able to construct your own DAQ system easily by combining these components. As an example, we have developed a DAQ system for the CC/NET [1] using DAQ-Middleware by adding a GUI part and a CAMAC readout part. CC/NET, a CAMAC controller, was developed to accomplish high-speed readout of CAMAC data. The basic design concept of CC/NET is to realize data taking through networks, so it is consistent with the DAQ-Middleware concept. We show how convenient it is to use DAQ-Middleware.
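
    The way such a system is assembled from readout and sink components can be sketched generically. The names below (Component, CamacReader, FileLogger, run_chain) are hypothetical and do not reproduce the actual DAQ-Middleware interfaces; the sketch only illustrates the idea of building a DAQ chain by combining components.

        # Generic sketch of composing a DAQ chain from components, in the spirit of
        # the framework described above (hypothetical names, not the real API).
        class Component:
            def configure(self): pass
            def run(self, data): return data


        class CamacReader(Component):
            """Stands in for the CAMAC readout part (e.g. data arriving from CC/NET)."""
            def run(self, data):
                return [0x1234, 0x5678]          # fake event words


        class FileLogger(Component):
            def __init__(self, path): self.path = path
            def run(self, data):
                with open(self.path, "a") as f:
                    f.write(" ".join(hex(w) for w in data) + "\n")
                return data


        def run_chain(components, cycles=3):
            for c in components:
                c.configure()
            for _ in range(cycles):
                data = None
                for c in components:              # each component processes in turn
                    data = c.run(data)


        if __name__ == "__main__":
            run_chain([CamacReader(), FileLogger("events.txt")])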

  9. High performance message passing for the ATLAS DAQ/EF-1 project

    CERN Document Server

    Mornacchi, Giuseppe

    1999-01-01

    Summary form only. A message passing library has been developed in the context of the ATLAS DAQ/EF-1 project. It is used for time-critical applications within the front-end part of the DAQ system, mainly to exchange data control messages between I/O processors. Key objectives of the design were low message overheads, efficient use of the data transfer buses, provision of broadcast functionality and a hardware- and operating-system-independent implementation of the application interface. The design and implementation of the message passing library are presented. As required by the project, the implementation is based on commercial components, namely VMEbus, PCI, the LynxOS real-time operating system and an additional inter-processor link, PVIC. The latter offers broadcast functionality, identified as being important to the overall performance of the message passing. In addition, performance benchmarks for all the buses involved are presented, for both simple test programs and the full DAQ applications. (0 refs)...
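
    One of the stated design goals, low per-message overhead, typically translates into a small fixed-size binary header in front of the payload. The sketch below shows such a header with hypothetical fields; it is not the actual DAQ/EF-1 wire format.

        # Illustrative fixed-size message header (hypothetical layout): source id,
        # destination id, message type and payload length packed into 16 bytes.
        import struct

        HEADER = struct.Struct("<IIII")  # little-endian: src, dst, type, length


        def pack_message(src: int, dst: int, msg_type: int, payload: bytes) -> bytes:
            return HEADER.pack(src, dst, msg_type, len(payload)) + payload


        def unpack_message(raw: bytes):
            src, dst, msg_type, length = HEADER.unpack_from(raw)
            return src, dst, msg_type, raw[HEADER.size:HEADER.size + length]


        if __name__ == "__main__":
            msg = pack_message(src=1, dst=0xFFFF, msg_type=2, payload=b"start-run")
            print(unpack_message(msg))   # (1, 65535, 2, b'start-run')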

  10. DAQ

    CERN Multimedia

    Frans Meijers

    2012-01-01

    Operations for the 2012 physics run: For the 2012 run, the DAQ system typically operates at the start of a fill with an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1 kHz recording of stream-A with a size of ~450 kB after compression. Stream-A includes the physics triggers and, since 2012, consists of the “core” triggers and the “parked” triggers, at about equal rate. In order to be able to handle the higher instantaneous luminosities in 2012 (so far, up to 6.5E33 at 50 ns bunch spacing) with a pile-up of ~35 events, an extension of the HLT was installed, commissioned and has been in operation since the start of data taking. Extension of the HLT farm: The CMS event builder and High-Level Trigger (HLT) farm are built using standard commercial PCs and networking equipment and are therefore easily extendable with state-of-the-art hardware. The HLT farm has been extended twice so far, in May 2011 and recently in May 2012. Table 1 shows the parameters and...

  11. DAQ

    CERN Multimedia

    P. Schieferdecker

    ConfDB: CMS HLT Configuration Database: The CMS High Level Trigger (HLT) is based on the CMSSW reconstruction framework and is therefore configured in much the same way as any offline or analysis job: by passing to the internal event-processing machinery a document which is valid according to the CMSSW configuration grammar. For offline reconstruction or analysis, this document can be formatted as a text file or a Python script, which CMSSW interprets to determine which specific software modules to load, which value to assign to each of their parameters, and in which succession to apply them to a given event. The configuration of the HLT is very complex: saving the most recent version of it into a single text file results in more than 8000 lines of instructions, amounting to more than 350 kB in size. As for any other subsystem of the CMS data acquisition system (DAQ), the record of the state of the HLT during data-taking must be meticulously kept and archived. It is crucial that several versions of a part...

  12. Physics Requirements for the ALICE DAQ system

    CERN Document Server

    Vande Vyvre, P

    2000-01-01

    The goal of this note is to review the requirements for the DAQ system arising from the various physics topics that will be studied by the ALICE experiment. It summarises all the current requirements, both for Pb-Pb and p-p interactions. The consequences in terms of throughput at different stages of the DAQ system are presented for different running scenarios.

  13. DAQ

    CERN Multimedia

    F. Meijers

    2012-01-01

    The DAQ operated efficiently for the remainder of the 2012 pp run, during which the LHC reached a peak luminosity of 7.5E33 (at 50 ns bunch spacing). At the start of a fill, typical conditions are: an L1 trigger rate close to 90 kHz, a raw event size of ~700 kB, and ~1.5 kHz recording of stream-A with a size of ~500 kB after compression. The stream-A High Level Trigger (HLT) output includes the physics triggers and consists of the ‘core’ triggers and the ‘parked’ triggers, at about equal rate. Downtime due to central DAQ was below 1%. During the year, various improvements and enhancements have been implemented. An example is the introduction of the ‘action-matrix’ in run control. This matrix defines a small set of run modes, each linking a consistent set of sub-detector read-out configurations and L1 and HLT settings as a function of the LHC mode. This mechanism facilitates operation as it automatically proposes the run mode depending on the actual...

  14. Belle DAQ system upgrade at 2001

    CERN Document Server

    Suzuki, S Y; Kim, H W; Kim, H J; Kim, H O; Nakao, M; Won, E; Yamauchi, M

    2002-01-01

    We renewed the data acquisition system for the Belle experiment. The previous data acquisition system, which had been used since December 1998, did not have a level-2 trigger facility. To improve the data reduction factor and the total throughput, we replaced the event builder, the online computer farm and the storage system. The event builder and online computer farm are unified into one system. This event building farm uses commodity hardware and adds level-2 trigger functionality. The new data acquisition system has been in operation since last autumn and is very stable. We took 36 fb^-1 with the new DAQ system, which has already overtaken the 30 fb^-1 that is the total amount taken with the previous DAQ system.

  15. A rule-based verification and control framework in ATLAS Trigger-DAQ

    CERN Document Server

    Kazarov, A; Lehmann-Miotto, G; Sloper, J E; Ryabov, Yu; Computing In High Energy and Nuclear Physics

    2007-01-01

    In order to meet the requirements of ATLAS data taking, the ATLAS Trigger-DAQ system is composed of O(1000) applications running on more than 2600 computers in a network. With such a system size, software and hardware failures are quite frequent. To minimize system downtime, the Trigger-DAQ control system includes advanced verification and diagnostics facilities. The operator should be able to use tests and the expertise of the TDAQ and detector developers in order to diagnose and recover from errors, automatically if possible. The TDAQ control system is built as a distributed tree of controllers, where the behaviour of each controller is defined in a rule-based language allowing easy customization. The control system also includes a verification framework which allows users to develop and configure tests for any component in the system, with different levels of complexity. It can be used as a stand-alone test facility for a small detector installation, as part of the general TDAQ initialization procedure, and for diagnosing the problems ...
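
    The verification side, running configured tests over a hierarchy of components and reacting to the results, can be sketched as below. The structure and names are hypothetical illustrations, not the actual ATLAS TDAQ verification framework or its rule language.

        # Illustrative sketch of running per-component tests over a controller tree
        # (hypothetical structure, not the real ATLAS TDAQ verification framework).
        class Component:
            def __init__(self, name, tests=(), children=()):
                self.name, self.tests, self.children = name, list(tests), list(children)


        def verify(component, depth=0):
            """Run a component's tests, then recurse into its children."""
            ok = True
            for test in component.tests:
                passed = test()
                print("  " * depth + f"{component.name}: {test.__name__} -> "
                      + ("OK" if passed else "FAILED"))
                ok &= passed
            # Only descend if the parent is healthy, mimicking dependency-aware checks.
            if ok:
                for child in component.children:
                    ok &= verify(child, depth + 1)
            return ok


        if __name__ == "__main__":
            def ping(): return True
            def check_config(): return True
            ros = Component("ROS-1", tests=[ping])
            root = Component("RootController", tests=[ping, check_config], children=[ros])
            verify(root)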

  16. DAQ

    CERN Multimedia

    Gomez-Reino Garrido

    Rack Control: In order to operate and monitor the CMS detector, a large amount of electronic equipment is being installed in around five hundred racks. These racks, full of PCs and other industrial and custom electronic instruments, must be closely controlled and monitored on a full-time basis. For this purpose, CMS has developed a Rack Control & Monitoring software application that is also used by the rest of the LHC experiments. On the control side, this application interfaces with the electrical distribution system, allowing individual racks or groups of racks to be powered on and off. For the rack environment monitoring part, the rack control software communicates with CERN-made monitoring boards installed in every rack. These boards provide, among other information, temperature, humidity and air flow readings inside each rack. Some automated actions are performed by the tool to anticipate and, if possible, prevent safety system actions in the racks. Racks are automatically switched off if temperature or dew point r...

  17. The DoubleChooz DAQ systems.

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Double Chooz (DC) reactor anti-neutrino experiment consists of a neutrino detector and a large-area Outer Veto detector. A custom data-acquisition (DAQ) system, written in the Ada language, was developed for all the sub-detectors of the neutrino detector, together with a generic object-oriented data acquisition system for the Outer Veto detector. Generic object-oriented programming was also used to support the several electronics systems to be read out, providing a simple interface for any new electronics to be added given a dedicated driver. The core electronics of the experiment are based on FADCs (500 MHz sampling rate), therefore a data-reduction scheme has been implemented to reduce the data volume per trigger. A dynamic data format was created to allow dynamic reduction of each trigger before the data are written to disk. The decision is based on low-level information that determines the relevance of each trigger. The DAQ is structured internally into two types of processors: several read-out processors readi...

  18. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher, software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used...
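
    A trigger-cost framework of this kind essentially accumulates, per trigger chain, counters such as events seen, events accepted and CPU time spent, and converts them to rates. A minimal sketch with hypothetical names (the chain name and the accounting class are illustrative, not the actual ATLAS monitoring code):

        # Minimal sketch of per-trigger cost accounting (hypothetical, illustration
        # only): accumulate CPU time and accepts per chain, then normalise to rates
        # over the sampled wall-clock time.
        from collections import defaultdict


        class TriggerCost:
            def __init__(self):
                self.events = defaultdict(int)    # events processed per chain
                self.accepts = defaultdict(int)   # events accepted per chain
                self.cpu_s = defaultdict(float)   # CPU seconds spent per chain

            def record(self, chain, cpu_seconds, accepted):
                self.events[chain] += 1
                self.cpu_s[chain] += cpu_seconds
                if accepted:
                    self.accepts[chain] += 1

            def report(self, wall_seconds):
                for chain in self.events:
                    rate = self.accepts[chain] / wall_seconds
                    cpu_per_event = self.cpu_s[chain] / self.events[chain]
                    print(f"{chain}: {rate:.1f} Hz accepted, "
                          f"{1e3 * cpu_per_event:.1f} ms CPU/event")


        if __name__ == "__main__":
            cost = TriggerCost()
            cost.record("e24_medium", cpu_seconds=0.040, accepted=True)   # hypothetical chain
            cost.record("e24_medium", cpu_seconds=0.035, accepted=False)
            cost.report(wall_seconds=1.0)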

  19. FAIR DAQ system: Performances and global DAQ management

    International Nuclear Information System (INIS)

    Ordine, A.; Boiano, A.; Zaghi, A.

    1997-01-01

    We present an overview of the features of FAIR (FAst Inter-crate Readout), a novel "plug-n-play" trigger- and readout-oriented bus system. It provides an effective, low-cost, homogeneous, highly extendible and scalable front-end environment. Readout and event building are performed at the same time, without the need for CPUs, by means of a transparent hardware-level protocol. The measured rate of data transfer and event building can be as fast as 22 ns/longword (1.44 Gbit/s). The measured performances will be discussed. The "plug-n-play" feature will also be presented in some detail, along with the control system based on a network embedded in the bus

  20. The use of Ethernet in the DataFlow of the ATLAS Trigger & DAQ

    CERN Document Server

    Stancu, Stefan; Dobinson, Bob; Korcyl, Krzysztof; Knezo, Emil; CHEP 2003 Computing in High Energy Physics

    2003-01-01

    The article analyzes a proposed network topology for the ATLAS DAQ DataFlow, and identifies the Ethernet features required for a proper operation of the network: MAC address table size, switch performance in terms of throughput and latency, the use of Flow Control, Virtual LANs and Quality of Service. We investigate these features on some Ethernet switches, and conclude on their usefulness for the ATLAS DataFlow network

  1. Verification and Diagnostics Framework in ATLAS Trigger/DAQ

    CERN Document Server

    Barczyk, M.; Caprini, M.; Da Silva Conceicao, J.; Dobson, M.; Flammer, J.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Soloviev, I.; Hart, R.; Amorim, A.; Klose, D.; Lima, J.; Pedro, J.; Wolters, H.; Badescu, E.; Alexandrov, I.; Kotov, V.; Mineev, M.; Ryabov, Yu.

    2003-01-01

    Trigger and data acquisition (TDAQ) systems for modern HEP experiments are composed of thousands of hardware and software components depending on each other in a very complex manner. Typically, such systems are operated by non-expert shift operators, who are not aware of the details of the system functionality. It is therefore necessary to help the operator to control the system and to minimize system down-time by providing knowledge-based facilities for automatic testing and verification of system components and also for error diagnostics and recovery. For this purpose, a verification and diagnostic framework was developed in the scope of ATLAS TDAQ. The verification functionality of the framework allows developers to configure simple low-level tests for any component in a TDAQ configuration. A test can be configured as one or more processes running on different hosts. The framework organizes tests in sequences, using knowledge about component hierarchy and dependencies, and allowing the operator to verify the fun...

  2. FPGAs for next gen DAQ and Computing systems at CERN

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The need for FPGAs in DAQ is a given, but newer systems needed to be designed to meet the substantial increase in data rate and the challenges that it brings. FPGAs are also power-efficient computing devices, so the work also looks at accelerating HEP algorithms and at the integration of FPGAs with CPUs, taking advantage of programming models like OpenCL. Other explorations involved using OpenCL to model a DAQ system.

  3. Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector

    CERN Document Server

    AUTHOR|(CDS)2091916; Hsu, Shih-Chieh; Hauck, Scott Alan

    The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs, followed by periods of inactivity known as Long Shutdowns (LS). During these LS phases both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC these upgrades improve its ability to create data for physicists; the more data the LHC can create, the more opportunities there are for rare events to appear that physicists will be interested in. The experiments upgrade so that they can record the data and ensure such events won't be missed. Currently the LHC is in Run 2, having completed the first of three Long Shutdowns. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented, starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of t...

  4. FASTBUS readout system for the CDF DAQ upgrade

    International Nuclear Information System (INIS)

    Andresen, J.; Areti, H.; Black, D.

    1993-11-01

    The Data Acquisition System (DAQ) at the Collider Detector at Fermilab is currently being upgraded to handle a minimum of 100 events/sec for an aggregate bandwidth of at least 25 Mbytes/sec. The DAQ system is based on a commercial switching network that has interfaces to the VME bus. The modules that read out the front-end crates (FASTBUS and RABBIT) have to deliver the data to the VME-bus-based host adapters of the switch. This paper describes a readout system that has the required bandwidth while keeping the experiment dead time due to the readout to a minimum

  5. Concepts and technologies used in contemporary DAQ systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    based trigger processor and event building farms. We have also seen a shift from standard or proprietary bus systems used in event building to GigaBit networks and commodity components, such as PCs. With the advances in processing power, network throughput, and storage technologies, today's data rates in large experiments routinely reach hundreds of MegaBytes/s. We will present examples of contemporary DAQ systems from different experiments, try to identify or categorize new approaches, and will compare the performance and throughput of existing DAQ systems with the projected data rates of the LHC experiments to see how close we have come to accomplish these goals. We will also tr...

  6. A DAQ system for pixel detectors R and D

    International Nuclear Information System (INIS)

    Battaglia, M.; Bisello, D.; Contarato, D.; Giubilato, P.; Pantano, D.; Tessaro, M.

    2009-01-01

    Pixel detector R and D for HEP and imaging applications requires an easily configurable and highly versatile DAQ system able to drive and read out many different chip designs in a transparent way, with different control logics and/or clock signals. An integrated, real-time data collection and analysis environment is essential to achieve fast and reliable detector characterization. We present a DAQ system developed to fulfill these specific needs, able to handle multiple devices at the same time while providing a convenient, ROOT-based data display and online analysis environment.

  7. Gated integrator PXI-DAQ system for Thomson scattering diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Kiran, E-mail: kkpatel@ipr.res.in; Pillai, Vishal; Singh, Neha; Thomas, Jinto; Kumar, Ajai

    2017-06-15

    A Gated Integrator (GI) PXI-based data acquisition (DAQ) system has been designed and developed for the ease of acquiring fast Thomson scattered signals (∼50 ns pulse width). The DAQ system consists of in-house designed and developed GI modules and a PXI-1405 chassis with several PXI-DAQ modules. The performance of the developed system has been validated during the SST-1 campaigns. The dynamic range of the GI module depends on the integrating capacitor (C_i), and the modules have been calibrated using 12 pF and 27 pF integrating capacitors. The GI-module-based data acquisition system consists of sixty-four channels for simultaneous sampling, using eight PXI-based digitization modules with eight channels per module. The error estimation and functional tests of this unit were carried out using a standard source and also with the fast detectors used for the Thomson scattering diagnostics. A user-friendly Graphical User Interface (GUI) has been developed using LabVIEW on the Windows platform to control and acquire the Thomson scattering signal. A robust, cost-effective DAQ system, easy to operate and maintain, with low power consumption and a high dynamic range with very good sensitivity, has been developed and tested for the SST-1 Thomson scattering diagnostics.

  8. SPHERE DAQ and off-line systems: implementation based on the qdpb system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2003-01-01

    The design of the online data acquisition (DAQ) system for the SPHERE setup (LHE, JINR) is described. The SPHERE DAQ is based on the qdpb (Data Processing with Branchpoints) system and on configurable representations of the experimental data and of the CAMAC hardware. The implementation of the DAQ and off-line program code, which depends on the SPHERE setup's hardware layout and experimental data contents, is explained, as well as the software modules specific to this implementation

  9. Orthos, an alarm system for the ALICE DAQ operations

    Science.gov (United States)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  10. Orthos, an alarm system for the ALICE DAQ operations

    International Nuclear Information System (INIS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; Von Haller, Barthelemy; Denes, Ervin

    2012-01-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  11. LHCb Silicon Tracker DAQ and DCS Online Systems

    CERN Multimedia

    Buechler, A; Rodriguez, P

    2009-01-01

    The LHCb experiment at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, specializes in precision measurements of b-quark decays. The Silicon Tracker (ST) contributes a crucial part in tracking the particle trajectories and consists of two silicon micro-strip detectors, the Tracker Turicensis upstream of the LHCb magnet and the Inner Tracker downstream. The radiation and the magnetic field represent new challenges for the implementation of a Detector Control System (DCS) and the data acquisition (DAQ). The DAQ has to deal with more than 270K analog readout channels, 2K readout chips and real-time DAQ at a rate of 1.1 MHz with data processing at the TELL1 level. The TELL1 real-time algorithms for clustering thresholds and other computations run on dedicated FPGAs that implement 13K configurable parameters per board, in total 1.17 K parameters for the ST. After data processing the total throughput amounts to about 6.4 Gbytes from an input data rate of around ~337 Gbytes per second. A finite state ma...

  12. DAQ system for high energy polarimeter at the LHE, JINR: implementation based on the qdpb (data processing with branchpoints) system

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    The implementation of the online data acquisition (DAQ) system for the High Energy Polarimeter (HEP) at the LHE, JINR is described. The HEP DAQ is based on the qdpb system. Software modules specific to this implementation (HEP data- and hardware-dependent) are discussed

  13. DAQ system for low density plasma parameters measurement

    International Nuclear Information System (INIS)

    Joshi, Rashmi S.; Gupta, Suryakant B.

    2015-01-01

    In various cases where low-density plasmas (number densities from 1E4 to 1E6 cm^-3) exist, for example in basic plasma studies or the LEO space environment, the measurement of plasma parameters becomes very critical. Conventional tip (cylindrical) Langmuir probes often result in unstable measurements in such lower-density plasma. Due to its larger surface area, a spherical Langmuir probe is used to measure such lower plasma densities. Applying a sweep voltage signal to the probe and measuring the current values corresponding to these voltages gives the V-I characteristic of the plasma, which can be plotted on a digital storage oscilloscope. This plot is analyzed to calculate various plasma parameters. The aim of this paper is to measure plasma parameters using a spherical Langmuir probe and an indigenously developed DAQ system. The DAQ system consists of a Keithley source-meter and a host system connected by a GPIB interface. An online plasma parameter diagnostic system has been developed for measuring plasma properties of non-thermal plasma in vacuum. An algorithm was developed using the LabVIEW platform. V-I characteristics of the plasma are plotted with respect to different filament current values and different locations of the Langmuir probe with respect to the plasma source. V-I characteristics are also plotted for forward and reverse voltage sweeps generated programmatically from the source-meter. (author)
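
    The sweep-and-measure loop described above can be sketched with PyVISA. The GPIB address and the SCPI commands below are placeholders that depend on the actual source-meter model, and the real system was implemented in LabVIEW; this is only an illustration of the idea.

        # Illustrative voltage sweep for a Langmuir probe using PyVISA over GPIB.
        # Address and SCPI commands are placeholders; consult the source-meter
        # manual for the real command set (the system described above uses LabVIEW).
        import numpy as np
        import pyvisa

        rm = pyvisa.ResourceManager()
        smu = rm.open_resource("GPIB0::24::INSTR")     # hypothetical GPIB address

        voltages = np.linspace(-30.0, 30.0, 121)       # sweep from -30 V to +30 V
        currents = []
        for v in voltages:
            smu.write(f":SOUR:VOLT {v:.3f}")           # set probe bias (placeholder SCPI)
            currents.append(float(smu.query(":MEAS:CURR?")))  # read probe current

        # The resulting V-I characteristic is what gets analysed for plasma parameters.
        np.savetxt("vi_curve.txt", np.column_stack([voltages, currents]))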

  14. Design of data transmission for a portable DAQ system

    International Nuclear Information System (INIS)

    Zhou Wenxiong; Nan Gangyang; Zhang Jianchuan; Wang Yanyu

    2014-01-01

    Field Programmable Gate Arrays (FPGAs), combined with ARM (Advanced RISC Machines) processors, are increasingly employed in portable data acquisition (DAQ) systems for nuclear experiments to reduce the system volume and achieve powerful and multifunctional capacity. High-speed data transmission between the FPGA and the ARM is one of the most challenging issues for system implementation. In this paper, we propose a method to realize high-speed data transmission in which the FPGA acquires massive data from the FEE (front-end electronics) and sends it to the ARM, while the ARM transmits the data to a remote computer through the TCP/IP protocol for later processing. This paper mainly introduces the interface design of the high-speed transmission method between the FPGA and the ARM, the transmission logic of the FPGA, and the program design of the ARM. The theoretical research shows that the maximal transmission speed between the FPGA and the ARM achieved in this way can reach 50 MB/s. In a realistic nuclear physics experiment, this portable DAQ system achieved a 2.2 MB/s data acquisition speed. (authors)
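
    A minimal sketch of the ARM-side forwarding loop described above: read blocks handed over by the FPGA and stream them to a remote computer over TCP. The device path, block size and remote address are illustrative assumptions, not details taken from the paper.

        # Sketch of the ARM program: forward FPGA data to a remote computer over TCP.
        # /dev/fpga_fifo, the block size and the remote host are assumptions.
        import socket

        FPGA_DEVICE = "/dev/fpga_fifo"    # assumed character device exposed by the FPGA interface
        REMOTE = ("192.168.1.100", 5000)  # assumed address of the remote DAQ computer
        BLOCK_SIZE = 64 * 1024

        def forward_fpga_data():
            with socket.create_connection(REMOTE) as sock, open(FPGA_DEVICE, "rb", buffering=0) as dev:
                while True:
                    block = dev.read(BLOCK_SIZE)   # blocks until the FPGA has filled a buffer
                    if not block:                  # end of run
                        break
                    sock.sendall(block)            # TCP provides in-order, lossless delivery

        if __name__ == "__main__":
            forward_fpga_data()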

  15. High Performance Gigabit Ethernet Switches for DAQ Systems

    CERN Document Server

    Barczyk, Artur

    2005-01-01

    Commercially available high performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems, on the other hand, usually make use of very specific traffic patterns, with e.g. deterministic arrival times. The industry-accepted loss-less limit of 99.999% delivery may still correspond to an unacceptably high loss rate for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches passing this criterion under random traffic can show significantly higher loss rates if subject to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis based core switches, in a test-bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet level simulation of the complete LHCb readout network. In this paper we report on the...

  16. An Introduction to ATLAS Pixel Detector DAQ and Calibration Software Based on a Year's Work at CERN for the Upgrade from 8 to 13 TeV

    CERN Document Server

    AUTHOR|(CDS)2094561

    An overview is presented of the ATLAS pixel detector Data Acquisition (DAQ) system obtained by the author during a year-long opportunity to work on calibration software for the 2015-16 Layer‑2 upgrade. It is hoped the document will function more generally as an easy entry point for future work on ATLAS pixel detector calibration systems. To begin with, the overall place of ATLAS pixel DAQ within the CERN Large Hadron Collider (LHC), the purpose of the Layer-2 upgrade and the fundamentals of pixel calibration are outlined. This is followed by a brief look at the high level structure and key features of the calibration software. The paper concludes by discussing some difficulties encountered in the upgrade project and how these led to unforeseen alternative enhancements, such as development of calibration “simulation” software allowing the soundness of the ongoing upgrade work to be verified while not all of the actual readout hardware was available for the most comprehensive testing.

  17. A verilog simulation of the CDF DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Schurecht, K.; Harris, R. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P.; Grindley, R. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system.

  18. A verilog simulation of the CDF DAQ system

    International Nuclear Information System (INIS)

    Schurecht, K.; Harris, R.; Sinervo, P.; Grindley, R.

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system

  19. Configurable data and CAMAC hardware representations for implementation of the SPHERE DAQ and offline systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    An implementation of a configurable representation of experimental data for use in the DAQ and offline systems of the SPHERE setup at the LHE, JINR is described. A software scheme for the configurable description of the SPHERE CAMAC hardware, intended for the online data acquisition (DAQ) implementation based on the qdpb system, is presented

  20. Embedded DAQ System Design for Temperature and Humidity Measurement

    Directory of Open Access Journals (Sweden)

    Tarique Rafique Memon

    2016-05-01

    In this work, we have proposed a cost effective DAQ (Data Acquisition) system design useful for local industries by using the user friendly LABVIEW (Laboratory Virtual Instrumentation Electronic Workbench) environment. The proposed system can measure and control different industrial parameters which can be presented in graphical icon format. The system design is proposed for 8 channels, whereas it has been tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are set as per upper and lower limits and controlled using relays. The embedded system is developed using a standard microcontroller to acquire and process the analog data and pass it on for further processing over a serial interface to a PC running LABVIEW. The designed system is capable of monitoring and recording the corresponding linkage between temperature and humidity in industrial units, indicates abnormalities within the process and controls those abnormalities through relays

  1. Embedded DAQ System Design for Temperature and Humidity Measurement

    International Nuclear Information System (INIS)

    Memon, T.R.

    2013-01-01

    In this work, we have proposed a cost effective DAQ (Data Acquisition) system design useful for local industries by using the user friendly LABVIEW (Laboratory Virtual Instrumentation Electronic Workbench) environment. The proposed system can measure and control different industrial parameters which can be presented in graphical icon format. The system design is proposed for 8 channels, whereas it has been tested and recorded for two parameters, i.e. temperature and RH (Relative Humidity). Both parameters are set as per upper and lower limits and controlled using relays. The embedded system is developed using a standard microcontroller to acquire and process the analog data and pass it on for further processing over a serial interface to a PC running LABVIEW. The designed system is capable of monitoring and recording the corresponding linkage between temperature and humidity in industrial units, indicates abnormalities within the process and controls those abnormalities through relays. (author)

  2. Overview and performance of the FNAL KTeV DAQ system

    International Nuclear Information System (INIS)

    Nakaya, T.; O'Dell, V.; Hazumi, M.; Yamanaka, T.

    1995-11-01

    KTeV is a new fixed target experiment at Fermilab designed to study CP violation in the neutral kaon system. The KTeV Data Acquisition System (DAQ) is one of the highest-performance DAQ systems in the field of high energy physics. The sustained data throughput of the KTeV DAQ reaches 160 Mbytes/sec, and the available online level 3 processing power is 3600 Mips. In order to handle such high data throughput, the KTeV DAQ is designed around a memory matrix core where the data flow is divided and parallelized. In this paper, we present the architecture and test results of the KTeV DAQ system

  3. The 2002 Test Beam DAQ

    CERN Multimedia

    Mapelli, L.

    The ATLAS Tilecal group was the first user of the Test Beam version of the DAQ/EF-1 prototype in 2000. The prototype was successfully tested in the lab in summer 1999 and was officially adopted as the baseline solution for the Test Beam DAQ at the end of 1999. It provides the right solution for users who need a modern data acquisition chain for final or almost final front-end and off-detector electronics (RODs and ROD emulators). The typical architecture for the readout and the DAQ is sketched in the figure below. A number of detector crates can send data over the Read Out Link to the Read Out System. The Read Out System sends data over an Ethernet link to a SubFarm PC, which forwards the data to Central Data Recording. In 2001 the Muon MDT group also adopted this DAQ, using for the first time a PC-based ReadOut System instead of the VME-based implementation used in 2000 and in the 2001 Tilecal DAQ. In 2002 Tilecal has also adopted the PC-based implement...

  4. Development of multi-channel gated integrator and PXI-DAQ system for nuclear detector arrays

    International Nuclear Information System (INIS)

    Kong Jie; Su Hong; Chen Zhiqiang; Dong Chengfu; Qian Yi; Gao Shanshan; Zhou Chaoyang; Lu Wan; Ye Ruiping; Ma Junbing

    2010-01-01

    A multi-channel gated integrator and a PXI based data acquisition system have been developed for nuclear detector arrays with hundreds of detector units. The multi-channel gated integrator can be controlled by a programmable GI controller. The PXI-DAQ system consists of an NI PXI-1033 chassis with several PXI-DAQ cards. The system software has a user-friendly GUI which is written in the C language using LabWindows/CVI under the Windows XP operating system. The performance of the PXI-DAQ system is very reliable, and the system is capable of handling event rates up to 40 kHz.

  5. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the data recorded. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance measured during the commissioning phase at the LHC Interaction Point.
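
    The abstract mentions zero-suppression implemented in the Scalable Readout System. The snippet below is only a software illustration of that idea (keep channels whose pedestal-subtracted value exceeds a noise-based threshold); it is not the actual SRS firmware, and the threshold and sample values are assumptions.

        # Illustration of zero-suppression: keep only strips whose pedestal-subtracted
        # ADC value exceeds n_sigma times the strip noise. All numbers are assumptions.
        def zero_suppress(adc, pedestals, noise, n_sigma=3.0):
            """Return a sparse list of (channel, value) pairs above threshold."""
            hits = []
            for ch, (raw, ped, sig) in enumerate(zip(adc, pedestals, noise)):
                value = raw - ped
                if value > n_sigma * sig:
                    hits.append((ch, value))
            return hits

        # Example: 8 channels, one strip (channel 5) carries a real signal.
        adc       = [101, 99, 100, 102, 98, 180, 100, 101]
        pedestals = [100] * 8
        noise     = [2.0] * 8
        print(zero_suppress(adc, pedestals, noise))   # -> [(5, 80)]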

  6. The DAQ system for the AEḡIS experiment

    Science.gov (United States)

    Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.

    2017-10-01

    In the sociology of small- to mid-sized (O(100) collaborators) experiments the issue of data collection and storage is sometimes felt as a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems. As such it may be hard to complete with off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: working out of the GXML representation of LabVIEW Data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment is cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has been able to cater to all run data logging and long term monitoring needs of the AEḡIS experiment so far.

  7. A prototype DAQ system for the ALICE experiment based on SCI

    International Nuclear Information System (INIS)

    Skaali, B.; Ingebrigtsen, L.; Wormald, D.; Polovnikov, S.; Roehrig, H.

    1998-01-01

    A prototype DAQ system for the ALICE/PHOS beam test and commissioning program is presented. The system has been taking data since August 1997, and represents one of the first applications of the Scalable Coherent Interface (SCI) as interconnect technology for an operational DAQ system. The front-end VMEbus address space is mapped directly from the DAQ computer memory space through SCI via PCI-SCI bridges. The DAQ computer is a commodity PC running the Linux operating system. The results of measurements of data transfer rate and latency for the PCI-SCI bridges in a PC-VMEbus SCI configuration are presented. An optical SCI link based on the Motorola Optobus I data link is described

  8. The version control service for ATLAS data acquisition configuration files (DAQ; configuration; OKS; XML)

    CERN Document Server

    Soloviev, Igor; The ATLAS collaboration

    2012-01-01

    To configure a data-taking session, the ATLAS systems and detectors store more than 160 MBytes of data acquisition related configuration information in OKS XML files. The total number of files exceeds 1300 and they are updated by many system experts. In the past, such updates occasionally caused problems due to XML syntax errors or files left in a state inconsistent with the overall ATLAS configuration. It was not always possible to know who made the modification causing the problem or how to go back to a previous version of the modified file. A few years ago a special service addressing these issues was implemented and deployed at ATLAS Point-1. It excludes direct write access to the XML files stored in a central database repository. Instead, for an update the files are copied into a user repository, validated after modification and committed using a version control system. The system's callback updates the central repository. Also, it keeps track of all modifications providi...
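
    The copy, validate, commit workflow described above can be sketched as follows. This is only an illustration under assumptions: the paths are invented, the use of Git and of a simple well-formedness check stands in for the actual ATLAS service, which is not reproduced here.

        # Sketch of a validate-then-commit update (paths and git usage are assumptions).
        import shutil
        import subprocess
        import sys
        import xml.etree.ElementTree as ET

        USER_REPO = "/tmp/user-oks-repo"            # assumed working copy
        CENTRAL_FILE = "partitions/ATLAS.data.xml"  # hypothetical OKS file name

        def update_config(source_path, repo=USER_REPO, rel_path=CENTRAL_FILE, message="update"):
            target = f"{repo}/{rel_path}"
            shutil.copy(source_path, target)        # 1. copy the edited file into the user repository
            try:
                ET.parse(target)                    # 2. reject files with XML syntax errors
            except ET.ParseError as err:
                sys.exit(f"validation failed, not committed: {err}")
            subprocess.run(["git", "-C", repo, "add", rel_path], check=True)     # 3. commit through
            subprocess.run(["git", "-C", repo, "commit", "-m", message], check=True)  # version control
            # A server-side hook (not shown) would then propagate the commit to the central repository.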

  9. Status of the Melbourne experimental particle physics DAQ, silicon hodoscope and readout systems

    International Nuclear Information System (INIS)

    Moorhead, G.F.

    1995-01-01

    This talk will present a brief review of the current status of the Melbourne Experimental Particle Physics group's primary data acquisition system (DAQ), the associated silicon hodoscope and trigger systems, and of the tests currently underway and foreseen. Simulations of the propagation of 106-Ru β particles through the system will also be shown

  10. Trigger and DAQ in the Combined Test Beam

    CERN Multimedia

    Dobson, M; Padilla, C

    2004-01-01

    Introduction During the Combined Test Beam the latest prototype of the ATLAS Trigger and DAQ system is being used to support the data taking of all the detectors. Further development of the TDAQ subsystems benefits from the direct experience given by the integration in the beam test. Support of detectors for the Combined Test Beam All ATLAS detectors need their own detector-specific DAQ development. The readout electronics is controlled by a Readout Driver (ROD), custom-built for each detector. The ROD receives data for events that are accepted by the first level trigger. The detector-specific part of the DAQ system needs to control the ROD and to respond to commands of the central DAQ (e.g. to "Start" a run). The ROD module then sends event data to a Readout System (ROS), a PC with special receiver modules/buffers. At this point the data enters the realm of the ATLAS DAQ and High Level Trigger system, constructed from Linux PCs connected with gigabit Ethernet networks. Most ATLAS detectors, representing s...

  11. Data Acquisition (DAQ) system dedicated for remote sensing applications on Unmanned Aerial Vehicles (UAV)

    Science.gov (United States)

    Keleshis, C.; Ioannou, S.; Vrekoussis, M.; Levin, Z.; Lange, M. A.

    2014-08-01

    Continuous advances in unmanned aerial vehicles (UAV) and the increased complexity of their applications raise the demand for improved data acquisition systems (DAQ). These improvements may comprise low power consumption, low volume and weight, robustness, modularity and capability to interface with various sensors and peripherals while maintaining the high sampling rates and processing speeds. Such a system has been designed and developed and is currently integrated on the Autonomous Flying Platforms for Atmospheric and Earth Surface Observations (APAESO/NEA-YΠOΔOMH/NEKΠ/0308/09) however, it can be easily adapted to any UAV or any other mobile vehicle. The system consists of a single-board computer with a dual-core processor, rugged surface-mount memory and storage device, analog and digital input-output ports and many other peripherals that enhance its connectivity with various sensors, imagers and on-board devices. The system is powered by a high efficiency power supply board. Additional boards such as frame-grabbers, differential global positioning system (DGPS) satellite receivers, general packet radio service (3G-4G-GPRS) modems for communication redundancy have been interfaced to the core system and are used whenever there is a mission need. The onboard DAQ system can be preprogrammed for automatic data acquisition or it can be remotely operated during the flight from the ground control station (GCS) using a graphical user interface (GUI) which has been developed and will also be presented in this paper. The unique design of the GUI and the DAQ system enables the synchronized acquisition of a variety of scientific and UAV flight data in a single core location. The new DAQ system and the GUI have been successfully utilized in several scientific UAV missions. In conclusion, the novel DAQ system provides the UAV and the remote-sensing community with a new tool capable of reliably acquiring, processing, storing and transmitting data from any sensor integrated

  12. DAQ system for testing RPC front-end electronics of the INO experiment

    International Nuclear Information System (INIS)

    Hari Prasad, K.; Sukhwani, Menka; Kesarkar, Tushar A.; Kumar, Sandeep; Chandratre, V.B.; Das, D.; Shinde, R.R.; Satyanarayana, B.

    2015-01-01

    The Resistive Plate Chamber (RPC) is the active detector element in the INO experiment. The in-house developed ANUSPARSH-III ASICs are being used as the front-end electronics of the detector. The 2 m X 2 m RPC being used has 64 readout channels on the X-side and 64 readout channels on the Y-side. In order to test and validate the FE along with the RPC, a 64-channel DAQ system has been designed and developed. The detector parameters to be measured are the noise rate, efficiency, hit pattern register and time resolution. The salient features of the DAQ system are: a 64-channel LVDS receiver in FPGA, FPGA based parameter calculations, and a microcontroller for acquiring the processed data from the FPGAs and sending it through Ethernet and USB interfaces. The DAQ system consists of the following parts: two FPGAs each receiving 32 LVDS channels, FPGA firmware, microcontroller firmware, an Ethernet interface, an embedded web server hosting data analysis software, a USB interface, and LabWindows based data analysis software. The DAQ system has been tested at TIFR with a 1 m X 1 m RPC

  13. Using Linux PCs in DAQ applications

    CERN Document Server

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joosb, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    The ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF-1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source Software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of readout modules in front-end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus based PC (the VMIVME-7587) and a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus, the status of the LDAQ implementation and draw some conclusions on the use of Linux in DAQ applica...

  14. Prototype system tests of the Belle II PXD DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Fleischer, Soeren; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David; Spruck, Bjoern [II. Physikalisches Institut, Justus-Liebig-Universitaet Giessen (Germany); Liu, Zhen' An; Xu, Hao; Zhao, Jingzhou [Institute of High Energy Physics, Chinese Academy of Sciences (China); Collaboration: II PXD Collaboration

    2012-07-01

    The data acquisition system for the Belle II DEPFET Pixel Vertex Detector (PXD) is designed to cope with a high input data rate of up to 21.6 GB/s. The main hardware component will be AdvancedTCA-based Compute Nodes (CN) equipped with Xilinx Virtex-5 FX70T FPGAs. The design for the third Compute Node generation was completed recently. The xTCA-compliant system features a carrier board and 4 AMC daughter boards. First test results of a prototype board will be presented, including tests of (a) the high-speed optical links used for data input, (b) the two 2 GB DDR2 chips on the board and (c) the output of data via Ethernet, using UDP and TCP/IP with both hardware and software protocol stacks.

  15. Implementation of KoHLT-EB DAQ System using compact RIO with EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Dae-Sik; Kim, Suk-Kwon; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    EPICS (Experimental Physics and Industrial Control System) is a collaboratively developed collection of software tools which can be integrated to provide a comprehensive and scalable control system. Currently there is an increase in the use of such systems in large physics experiments like KSTAR, ITER and DAIC (Daejeon Accelerator Ion Complex). The Korean heat load test facility (KoHLT-EB) was installed at KAERI. This facility is utilized for qualification tests of the plasma facing components (PFC) for the ITER first wall and the DEMO divertor, and for thermo-hydraulic experiments. The existing data acquisition device was an Agilent 34980A multifunction switch and measurement unit controlled by Agilent VEE. In the present paper, we report on the newly upgraded, EPICS-based KoHLT-EB DAQ system, an advanced data acquisition system using FPGA-based reconfigurable DAQ devices such as CompactRIO. The operator interface of the KoHLT-EB DAQ system is built with Control System Studio (CSS); a separate server archives the related data using the standalone archive tool, and the archive viewer can retrieve these data at any time within the infra-network.

  16. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition Systems (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, the data is retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the maximum first-level trigger (L1A) rate to 1 kHz. In order to get rid of the VME bottleneck and improve the scalability and the overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The obtained results and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  17. Development of BPM/BLM DAQ System for KOMAC Beam Line

    Energy Technology Data Exchange (ETDEWEB)

    Song, Young-Gi; Kim, Jae-Ha; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2016-10-15

    The proton beam is accelerated from 3 MeV to 100 MeV through 11 DTL tanks. KOMAC installed 10 beam lines, 5 for 20-MeV beams and 5 for 100-MeV beams. The proton beam is transmitted to two target rooms. KOMAC has been operating two beam lines, one for 20 MeV and one for 100 MeV. A new beam line, the RI beam line, is under commissioning. A data acquisition (DAQ) system is essential to monitor the beam signals from the analog front-end circuitry of the BPMs and BLMs at the beam lines. The DAQ digitizes the beam signal, and the sampling is synchronized with a reference signal, which is an external trigger for beam operation. The digitized data is accessible through the Experimental Physics and Industrial Control System (EPICS)-based control system, which manages the whole accelerator control. The beam monitoring system integrates the BLM and BPM signals into the control system and offers real-time data to operators. The IOC, which is implemented with Linux and a PCI driver, supports data acquisition as a very flexible solution.
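
    Since the digitized data are exposed through EPICS, a client can read them over Channel Access. The example below is a hedged sketch using the pyepics client library (an assumption, the abstract does not name a client); the PV names are hypothetical.

        # Hedged example: read a digitized beam-position waveform over EPICS Channel Access
        # with pyepics. PV names are hypothetical placeholders.
        from epics import caget

        BPM_WAVEFORM_PV = "KOMAC:BL20:BPM1:WAVEFORM"   # hypothetical PV name
        TRIGGER_COUNT_PV = "KOMAC:BL20:TRIG:COUNT"     # hypothetical PV name

        def read_bpm_pulse():
            samples = caget(BPM_WAVEFORM_PV)           # array of digitized samples
            pulse_id = caget(TRIGGER_COUNT_PV)         # reference-trigger counter for this pulse
            if samples is None:
                raise RuntimeError("Channel Access timeout")
            return pulse_id, samples

        if __name__ == "__main__":
            pulse_id, samples = read_bpm_pulse()
            print(f"pulse {pulse_id}: {len(samples)} samples, peak {max(samples):.3f}")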

  18. The DAQ system of OPERA experiment and its specifications for the spectrometers

    International Nuclear Information System (INIS)

    Dusini, S.; Barichello, G.; Dal Corso, F.; Felici, G.; Lindozzi, M.; Stalio, S.; Sorrentino, G.

    2004-01-01

    We present an overview of the data acquisition system (DAQ) and event building of OPERA. OPERA is a long baseline neutrino experiment with a highly modular detector and a low event rate. To deal with these features, a distributed DAQ system based on Ethernet standards for the data transfer has been chosen. A distributed GPS clock signal is used for synchronization and time stamping of the data. This architecture allows very modular and flexible event building based on a software trigger strategy. We also present its specific application to the spectrometer sub-detector where the RPC trackers are installed. Self-triggering is a dedicated feature that makes the system sensitive to out-of-spill events and may allow data taking before the official start of the experiment

  19. Cold front-end electronics and Ethernet-based DAQ systems for large LAr TPC readout

    CERN Document Server

    D. Autiero; B. Carlus; Y. Declais; S. Gardien; C. Girerd; J. Marteau; H. Mathez

    2010-01-01

    Large LAr TPCs are among the most powerful detectors to address open problems in particle and astro-particle physics, such as CP violation in the leptonic sector, neutrino properties and their astrophysical implications, proton decay searches, etc. The scale of such detectors implies severe constraints on their readout and DAQ system. We are carrying out an electronics R&D program on a complete readout chain, including an ASIC located close to the collecting planes in the argon gas phase and a DAQ system based on smart Ethernet sensors implemented in the µTCA standard. The choice of the latter standard is motivated by the similarity of its constraints to those existing in the network telecommunication industry. We also developed a synchronization scheme based on the IEEE 1588 standard, complemented by the use of the clock recovered from the Gigabit link

  20. Web-based DAQ systems: connecting the user and electronics front-ends

    Science.gov (United States)

    Lenzi, Thomas

    2016-12-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
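
    The paragraph above motivates pushing monitoring directly to the browser over WebSockets. The sketch below illustrates that idea with the Python 'websockets' package (an assumption made for the example), rather than the authors' FPGA/Microblaze implementation; the payload fields are placeholders.

        # Minimal WebSocket monitoring server: pushes a small JSON record to any
        # connected browser once per second. Payload fields are illustrative.
        import asyncio
        import json
        import random

        import websockets

        async def push_monitoring(websocket, path=None):
            """Send a monitoring record to the client every second."""
            while True:
                record = {"event_rate_hz": random.randint(900, 1100),   # placeholder values
                          "occupancy": round(random.random(), 3)}
                await websocket.send(json.dumps(record))
                await asyncio.sleep(1.0)

        async def main():
            # A web page can then open ws://<host>:8765/ and render the stream.
            async with websockets.serve(push_monitoring, "0.0.0.0", 8765):
                await asyncio.Future()   # run forever

        if __name__ == "__main__":
            asyncio.run(main())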

  1. Web-based DAQ systems: connecting the user and electronics front-ends

    International Nuclear Information System (INIS)

    Lenzi, Thomas

    2016-01-01

    Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.

  2. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Frans Meijers

    The installation of the 50 kHz DAQ/HLT system was completed during 2008. The equipment consists of the full detector readout, 8 DAQ slices with a 1 Tbit/s event building capacity, an event filter to run the High Level Trigger (HLT) comprising 720 8-core PCs, and a 16-node storage manager system allowing a write throughput of up to 2 GByte/s and a total capacity of 300 TByte. The 50 kHz DAQ system has been commissioned and has been put into service for global cosmics and commissioning data taking. During CRAFT, data was taken with the full detector at ~600 Hz cosmic trigger rate. Often an additional 20 kHz of random triggers were mixed in, which were pre-scaled for storage. The random rate has been increased to ~90 kHz for the commissioning and cosmics runs in 2009, which included all detectors except the tracker. The DAQ system is used, in addition to global data taking, for further commissioning and testing of the central DAQ. To this end data emulators are used at the front-end of the central DAQ (in...

  3. The LHCb RICH Upgrade: Development of the DCS and DAQ system.

    CERN Multimedia

    Cavallero, Giovanni

    2018-01-01

    The LHCb experiment is preparing for an upgrade during the second LHC long shutdown in 2019-2020. In order to fully exploit the LHC flavour physics potential with a five-fold increase in the instantaneous luminosity, a trigger-less readout will be implemented. The RICH detectors will require new photon detectors and brand new front-end electronics. The status of the integration of the RICH photon detector modules with the MiniDAQ, the prototype of the upgraded LHCb readout architecture, is reported. The development of the prototype of the RICH Upgrade Experiment Control System, integrating the DCS and DAQ partitions in a single FSM, is described. The status of the development of the RICH Upgrade Inventory, Bookkeeping and Connectivity database is reported as well.

  4. The New CMS DAQ System for Run 2 of the LHC

    CERN Document Server

    AUTHOR|(CDS)2087644; Behrens, Ulf; Branson, James; Chaze, Olivier; Cittolin, Sergio; Darlea, Georgiana Lavinia; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Forrest, Andrew Kevin; Gigi, Dominique; Glege, Frank; Gomez Ceballos, Guillelmo; Gomez-Reino Garrido, Robert; Hegeman, Jeroen Guido; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; Vivian O'Dell; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Stieger, Benjamin Bastian; Sumorok, Konstanty; Veverka, Jan; Zejdl, Petr

    2015-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold. Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a micro-TCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation...

  5. The operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The first part of this presentation will give an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It will describe how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking and, in some cases, to push performance beyond the original design specification. The experience accumulated in operating the ATLAS DAQ/HLT system during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the se...

  6. Development of DAQ-Middleware

    International Nuclear Information System (INIS)

    Yasu, Y; Nakayoshi, K; Sendai, H; Inoue, E; Tanaka, M; Suzuki, S; Satoh, S; Muto, S; Otomo, T; Nakatani, T; Uchida, T; Ando, N; Kotoku, T; Hirano, S

    2010-01-01

    DAQ-Middleware is a software framework for network-distributed DAQ systems based on Robot Technology Middleware, an international standard of the Object Management Group (OMG) in robotics whose implementation was developed by AIST. A DAQ-Component is a software unit of DAQ-Middleware. Basic components have already been developed. For example, Gatherer is a readout component, Logger is a data-logging component, Monitor is an analysis component, and Dispatcher connects to Gatherer on the input side of the data path and to Logger/Monitor on the output side. The DAQ operator is a special component which controls these components by using the control/status path. The control/status path and data path, as well as the XML-based system configuration and XML/HTTP-based system interface, are well defined in the DAQ-Middleware framework. DAQ-Middleware was adopted by experiments at J-PARC, and commissioning at first beam was successfully carried out. The functionality of DAQ-Middleware and the status of DAQ-Middleware at J-PARC are presented.

  7. Effective diagnostic DAQ systems to reduce unnecessary data in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taegu, E-mail: glory@nfri.re.kr; Lee, Woongryol; Hong, Jaesic; Park, Kaprai

    2016-11-15

    Highlights: • When plasma shots do not complete the intended target time successfully, the diagnostics systems continue to record unusable data, contributing to the increasing data size. • To overcome this problem, some KSTAR libraries were upgraded to monitor the plasma status in real time. • With the real-time information on plasma status, some of the KSTAR diagnostic systems stop acquiring unnecessary data. • We were able to reduce unusable data by approximately 698 GByte in the 7th KSTAR campaign. • This was a very effective way to store useful data, and it was helpful to analysts after the shot. - Abstract: The plasma status of the Korea Superconducting Tokamak Advanced Research (KSTAR) device is measured by various diagnostics systems. The measured data size has been increasing every year due to increasing plasma pulse lengths, higher diagnostics operating frequencies, the addition of new diagnostic systems, and an increasing number of diagnostics channels. At times, when plasma shots do not complete the intended target time successfully, the diagnostics systems continue to record unusable data, contributing to the increasing data size. In addition, the analysis time is affected, as these data need to be separated from the relevant data set. To overcome this problem, KSTAR's Standard Framework (SFW), Real Time Monitoring (RTMON), and Pulse Automation and Scheduling System (PASS) were upgraded to monitor the plasma status in real time. When the plasma current is less than 200 kA, RTMON sends the plasma status information every second to the SFW via EPICS Channel Access. With this real-time information on the plasma status, some of the KSTAR diagnostic systems stop acquiring unnecessary data. This paper describes a method for reducing the storage of unnecessary data and its results in the 7th KSTAR campaign.
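
    The monitor-and-stop logic described above can be sketched with an EPICS Channel Access monitor callback. The code below is a hedged illustration using pyepics (an assumption); the PV name, threshold handling and stop hook are placeholders, not KSTAR's actual implementation.

        # Hedged sketch: a diagnostic DAQ process monitors a plasma-status PV and stops
        # acquiring once the plasma current drops below 200 kA. PV name is hypothetical.
        import time
        from epics import PV

        PLASMA_STATUS_PV = "KSTAR:RTMON:PLASMA_CURRENT_KA"   # hypothetical PV name
        CURRENT_THRESHOLD_KA = 200.0

        class DiagnosticDaq:
            def __init__(self):
                self.acquiring = True
                self.status_pv = PV(PLASMA_STATUS_PV, callback=self.on_status)

            def on_status(self, value=None, **kwargs):
                # Called by Channel Access on every update sent by the monitoring system.
                if value is not None and value < CURRENT_THRESHOLD_KA and self.acquiring:
                    self.acquiring = False
                    print("plasma current below threshold, stopping acquisition")

            def run(self):
                while self.acquiring:
                    time.sleep(0.1)   # real code would read and store digitizer buffers here

        if __name__ == "__main__":
            DiagnosticDaq().run()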

  8. The trigger and DAQ systems of the NA59 experiment

    CERN Document Server

    Ünel, Gokhan; Ballestrero, Sergio

    2004-01-01

    The NA59 experiment on the CERN SPS-H2 beam-line took data during the summers of 1999 and 2000 to perform intercalibration studies of polarization measurements and to test the use of an aligned crystal as a quarter-wave plate. The analysis provided a proof of concept for the birefringence property of aligned crystals for photons in the 30-170 GeV energy range. The 90-m-long detector for this fixed target experiment had two independent readout schemes: one for more than 120 time-to-digital and analog-to-digital converter channels to obtain tracking and energy information; and another for the readout of the silicon strip detectors to improve the vertex resolution. The readout electronics of the NA59 experiment was based on VMEbus and CAMAC systems. Novel data acquisition and online monitoring software were written to work on commodity hardware (PCs) running mainly the Linux operating system.

  9. Applications of an OO (Objected Oriented) methodology and case to a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and we relate our experiences with the method and tool, its integration into our development environment and the spiral life cycle it supports. (author)

  10. DAQ application of PC oscilloscope for chaos fiber-optic fence system based on LabVIEW

    Science.gov (United States)

    Lu, Manman; Fang, Nian; Wang, Lutang; Huang, Zhaoming; Sun, Xiaofei

    2011-12-01

    In order to simultaneously obtain a high sample rate and a large buffer in data acquisition (DAQ) for a chaos fiber-optic fence system, we developed a dual-channel high-speed DAQ application for the PicoScope 5203 digital oscilloscope based on LabVIEW. We accomplished this by creating call library function (CLF) nodes to call the DAQ functions in the two dynamic link libraries (DLLs), PS5000.dll and PS5000wrap.dll, provided by Pico Technology. The maximum real-time sample rate of the DAQ application can reach 1 GS/s. The resolution of the application in sample time and data amplitude can be controlled by changing their units in the block diagram, as can the start and end times of the sampling operations. The experimental results show that the application has a sufficiently high sample rate and a large enough buffer to meet the demanding DAQ requirements of the chaos fiber-optic fence system.

  11. A potent approach for the development of FPGA based DAQ system for HEP experiments

    Science.gov (United States)

    Khan, Shuaib Ahmad; Mitra, Jubin; David, Erno; Kiss, Tivadar; Nayak, Tapan Kumar

    2017-10-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been a demand for robust Data Acquisition (DAQ) schemes which perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics situated in the harsh radiation environment and the back-end Data Processing Unit (DPU) placed in a low radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of the measurements of resource utilisation, critical path delays, signal integrity, eye diagram and Bit Error Rate (BER) are presented, which are the indicators for efficient system performance.

  12. Development of the Calibrator of Reactivity Meter Using PC-Based DAQ System

    International Nuclear Information System (INIS)

    Edison; Mariatmo, A.; Sujarwono

    2007-01-01

    The reactivity meter calibrator has been developed using a PC-based DAQ system programmed in LabVIEW. The output of the calibrator is a voltage proportional to the neutron density n(t) corresponding to a step reactivity change ρ_0. The “Kalibrator meter reactivitas.vi” program calculates the seven roots and coefficients of the solution n(t) of the reactor kinetics equations using the in-hour equation. Based on the time step dt = t_{k+1} - t_k and t_0 = 0 entered by the user, the program approximates n(t) on each interval t_k ≤ t < t_{k+1}, where k = 0, 1, 2, 3, ..., by the step function n(t) = n_0 Σ_{j=1..7} A_j exp(ω_j t_k). The program then commands the DAQ device to output the voltage V(t) = n(t) volts at time t. Measurements of standard reactivities with the reactivity meter showed that the maximum deviation of the measured reactivity from its standard value was less than 1%. (author)
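
    A small numerical illustration of the step-function approximation quoted above is given below. The roots and coefficients used here are placeholders chosen so that n(0) = n_0; the real program obtains them from the in-hour equation for the requested step reactivity.

        # Illustration of n(t_k) = n0 * sum_{j=1..7} A_j * exp(w_j * t_k).
        # A_j and w_j are placeholder values, not the calibrator's computed solution.
        import math

        def n_of_t(t_k, n0, amplitudes, roots):
            """Neutron density at step time t_k for given in-hour roots and coefficients."""
            return n0 * sum(a * math.exp(w * t_k) for a, w in zip(amplitudes, roots))

        # Placeholder 7-root solution (illustrative only); coefficients sum to 1 so n(0) = n0.
        A = [1.05, 0.02, 0.01, 0.008, 0.005, 0.004, -0.097]
        W = [0.012, -0.03, -0.12, -0.32, -1.1, -2.9, -55.0]

        dt, n0 = 0.5, 1.0
        for k in range(5):
            t_k = k * dt
            print(f"t = {t_k:4.1f} s  V = {n_of_t(t_k, n0, A, W):.4f} V")   # 1 V per unit n(t)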

  13. A potent approach for the development of FPGA based DAQ system for HEP experiments

    International Nuclear Information System (INIS)

    Khan, Shuaib Ahmad; Mitra, Jubin; Nayak, Tapan Kumar; David, Erno; Kiss, Tivadar

    2017-01-01

    With ever increasing particle beam energies and interaction rates in modern High Energy Physics (HEP) experiments at present and future accelerator facilities, there has always been a demand for robust Data Acquisition (DAQ) schemes which perform in harsh radiation environments and handle high data volumes. The scheme is required to be flexible enough to adapt to the demands of future detector and electronics upgrades, while at the same time keeping the cost factor in mind. To address these challenges, in the present work we discuss an efficient DAQ scheme for error-resilient, high-speed data communication on commercially available state-of-the-art FPGAs with optical links. The scheme utilises the GigaBit Transceiver (GBT) protocol to establish a radiation-tolerant communication link between the on-detector front-end electronics situated in the harsh radiation environment and the back-end Data Processing Unit (DPU) placed in a low radiation zone. The acquired data are reconstructed in the DPU, which reduces the data volume significantly, and then transmitted to the computing farms through high speed optical links using 10 Gigabit Ethernet (10GbE). In this study, we focus on the implementation and testing of the GBT protocol and 10GbE links on an Intel FPGA. Results of the measurements of resource utilisation, critical path delays, signal integrity, eye diagram and Bit Error Rate (BER) are presented, which are the indicators for efficient system performance.

  14. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, the issues we met during this project, including integration of the database into an existing distributed environment, and factors which influence performance. (author)

  15. Commissioning and integration testing of the DAQ system for the CMS GEM upgrade

    CERN Document Server

    Castaneda Hernandez, Alfredo Martin

    2017-01-01

    The CMS muon system will undergo a series of upgrades in the coming years to preserve and extend its muon detection capabilities during the High Luminosity LHC era. The first of these will be the installation of triple-foil GEM detectors in the CMS forward region, with the goal of maintaining trigger rates and preserving good muon reconstruction even in the expected harsh environment. In 2017 the CMS GEM project aims to achieve a major milestone with the installation of 5 super-chambers in CMS; this exercise will allow the study of services installation and commissioning, and of integration with the rest of the subsystems, for the first time. An overview of the DAQ system will be given, with emphasis on its usage during chamber quality control testing, commissioning in CMS, and integration with the central CMS system.

  16. New COMPASS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Yunpeng; Konorov, Igor

    2015-07-01

    This contribution focuses on the deployment and first results of the new FPGA-based data acquisition system (DAQ) of the COMPASS experiment. Since 2002, the number of channels has increased to approximately 300 000 and the trigger rate to 30 kHz; the average event size has remained roughly 35 kB. In order to handle the increased data rates, it was decided that a new DAQ system with custom FPGA-based data handling cards (DHC) would replace the event-building network. The DHCs are equipped with 16 high speed serial links, 2 GB of DDR3 memory with a bandwidth of 6 GB/s, a Gigabit Ethernet connection, and the COMPASS Trigger Control System. Two different firmware versions are used: multiplexer and switch. The multiplexer DHC can combine 15 incoming links into one outgoing link, whereas the switch combines 8 data streams from multiplexers and, using information from a look-up table, sends full events to the readout engine servers, which are equipped with spill-buffer PCI-Express cards that receive the data. Both types of DHC can buffer data, which allows the load to be distributed over the accelerator cycle. For the purposes of configuration, run control, and monitoring, software tools have been developed. Communication between processes in the system is implemented using the DIM library. The DAQ is fully configurable from a web interface. The new DAQ system was deployed for the pilot run starting in September 2014. In the poster, the preliminary performance and stability results of the new DAQ are presented and compared with the original system in more detail.

  17. DAQ INSTALLATION IN USC COMPLETED

    CERN Multimedia

    A. Racz

    After one year of work at P5 in the underground control rooms (USC55-S1&S2), the DAQ installation in USC55 is complete. The first half of 2006 was dedicated to the installation of the DAQ infrastructure (private cable trays, rack equipment for very dense cabling, connection to services, i.e. water, power and network). The second half was spent installing the custom-made electronics (FRLs and FMMs) and placing all the inter-rack cables/fibers connecting all sub-systems to the central DAQ (more details are given in the internal pages). The installation has been carried out by DAQ group members from both the hardware and software sides. The pictures show the very nice team spirit!

  18. Development of a cost-effective and flexible vibration DAQ system for long-term continuous structural health monitoring

    Science.gov (United States)

    Nguyen, Theanh; Chan, Tommy H. T.; Thambiratnam, David P.; King, Les

    2015-12-01

    In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular as this could keep track of the health status of structures during their service lives. However, implementing such a system is not always feasible due to on-going conflicts between budget constraints and the need of sophisticated systems to monitor real-world structures under their demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex with a special focus on the main building. First, selections of sensor type and sensor positions are scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. In order to economically tackle the sparse measurement problem, a cost-optimized Ethernet-based peripheral DAQ model is first adopted to form the system skeleton. A combination of a high-resolution timing coordination method based on the TCP/IP command communication medium and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental-numerical verifications show that the proposed DAQ system in general and the data synchronization solution in particular work well and they can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to make use of the developed monitoring system for long-term continuous structural health evaluation as well as to use the instrumented building herein as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization related issues.
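
    The timing coordination described above is presented at the architecture level. As one concrete illustration (our assumption, not the authors' exact algorithm), a TCP/IP command exchange can yield an NTP-style estimate of the offset between a DAQ unit's clock and the coordinator's clock, which periodic resynchronization then keeps up to date.

        # NTP-style offset estimate from one command round trip: the remote timestamp is
        # assumed to be taken roughly midway between local send (t1) and receive (t4).
        import time

        def estimate_offset(t1, t_remote, t4):
            """Estimate remote_clock - local_clock from one round trip."""
            return t_remote - (t1 + t4) / 2.0

        # Simulated exchange: a remote unit 2.5 ms ahead, 1 ms network delay each way.
        t1 = time.time()
        t_remote = t1 + 0.001 + 0.0025     # reply timestamped on arrival at the remote unit
        t4 = t1 + 0.002
        print(f"estimated offset: {estimate_offset(t1, t_remote, t4) * 1e3:.2f} ms")   # ~2.5 ms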

  19. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    Energy Technology Data Exchange (ETDEWEB)

    Fischler, M. [Fermilab; Green, C. [Fermilab; Kowalkowski, J. [Fermilab; Norman, A. [Fermilab; Paterno, M. [Fermilab; Rechenmacher, R. [Fermilab

    2012-06-22

    To make its core measurements, the NOvA experiment needs to make real-time data-driven decisions involving beam-spill time correlation and other triggering issues. NOvA-DDT is a prototype Data-Driven Triggering system, built using the Fermilab artdaq generic DAQ/Event-building toolkit. This provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module, unchanged, as a component of the online triggering decisions. The NOvA-artdaq architecture chosen has significant advantages, including graceful degradation if the triggering decision software fails or cannot be done quickly enough for some fraction of the time-slice "events". We have tested and measured the performance and overhead of NOvA-DDT using an actual Hough transform based trigger decision module taken from the NOvA offline software. The results of these tests (98 ms mean time per event on only 1/16 of the available processing power of a node, and overheads of about 2 ms per event) provide a proof of concept: NOvA-DDT is a viable strategy for data acquisition, event building, and trigger processing at the NOvA far detector.
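
    At the heart of the quoted trigger decision is a Hough transform, in which each hit votes for the straight lines that could pass through it, and a heavily populated accumulator bin indicates a track such as a cosmic ray. The C++ sketch below shows the textbook form of that transform with invented hits and binning; it is not the NOvA-DDT module itself.

        // Textbook Hough transform over a set of 2D hits (illustrative only):
        // each hit votes for all (theta, rho) bins consistent with it; a bin with
        // many votes corresponds to a straight track.
        #include <algorithm>
        #include <array>
        #include <cmath>
        #include <iostream>
        #include <utility>
        #include <vector>

        int main() {
            const double kPi = 3.14159265358979323846;
            std::vector<std::pair<double, double>> hits;
            for (int i = 0; i < 20; ++i) hits.push_back({i * 1.0, 2.0 * i + 1.0});  // hits on one line

            constexpr int kThetaBins = 180, kRhoBins = 200;
            constexpr double kRhoMax = 100.0;
            std::array<std::array<int, kRhoBins>, kThetaBins> acc{};

            for (const auto& [x, y] : hits) {
                for (int t = 0; t < kThetaBins; ++t) {
                    double theta = t * kPi / kThetaBins;
                    double rho = x * std::cos(theta) + y * std::sin(theta);
                    int r = static_cast<int>((rho + kRhoMax) / (2 * kRhoMax) * kRhoBins);
                    if (r >= 0 && r < kRhoBins) ++acc[t][r];
                }
            }

            int best = 0;
            for (const auto& row : acc)
                for (int v : row) best = std::max(best, v);
            std::cout << "max votes in a single (theta, rho) bin: " << best
                      << " out of " << hits.size() << " hits\n";
            // A data-driven trigger would accept the time slice if 'best' exceeds a threshold.
        }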

  20. Design of low noise front-end ASIC and DAQ system for CdZnTe detector

    International Nuclear Information System (INIS)

    Luo Jie; Deng Zhi; Liu Yinong

    2012-01-01

    A low noise front-end ASIC has been designed for CdZnTe detectors. The chip contains 16 channels, and each channel consists of a dual-stage charge sensitive preamplifier, a 4th order semi-Gaussian shaper, a leakage current compensation (LCC) circuit, a discriminator and an output buffer. The chip has been fabricated in a Chartered 0.35 μm CMOS process, and preliminary results show that it works well. The total channel charge gain can be adjusted from 100 mV/fC to 400 mV/fC and the peaking time can be adjusted from 1 μs to 4 μs. The minimum measured ENC at zero input capacitance is 70 e and the minimum noise slope is 20 e/pF. The peak detector and derandomizer (PDD) ASIC developed by BNL and an associated USB DAQ board are also introduced in this paper. Two front-end ASICs can be connected to the PDD ASIC on the USB DAQ board to form a 32-channel DAQ system for CdZnTe detectors. (authors)
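
    The quoted noise figures define a simple linear model for the expected noise versus input capacitance, ENC(Cin) = 70 e + 20 e/pF x Cin. The short C++ sketch below just evaluates that line for a few capacitances; the capacitance values themselves are illustrative and not taken from the paper.

        // Expected ENC versus input capacitance from the two quoted figures
        // (70 e at zero capacitance, 20 e/pF slope); capacitances are examples.
        #include <initializer_list>
        #include <iostream>

        int main() {
            const double enc0 = 70.0;   // electrons at zero input capacitance
            const double slope = 20.0;  // electrons per pF

            for (double cin : {0.0, 2.0, 5.0, 10.0}) {  // pF
                double enc = enc0 + slope * cin;         // electrons (rms)
                std::cout << "Cin = " << cin << " pF -> ENC ~ " << enc << " e\n";
            }
        }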

  1. NOvA Event Building, Buffering and Data-Driven Triggering From Within the DAQ System

    International Nuclear Information System (INIS)

    Fischler, M; Rechenmacher, R; Green, C; Kowalkowski, J; Norman, A; Paterno, M

    2012-01-01

    The NOvA experiment is a long baseline neutrino experiment designed to make precision probes of the structure of neutrino mixing. The experiment features a unique deadtimeless data acquisition system that is capable of acquiring and building an event data stream from the continuous readout of the more than 360,000 far detector channels. In order to achieve its physics goals, the experiment must be able to buffer, correlate and extract the data in this stream with the beam-spills that occur at Fermilab. In addition, the NOvA experiment seeks to enhance its data collection efficiency for rare classes of event topologies that are valuable for calibration through the use of data driven triggering. NOvA-DDT is a prototype Data-Driven Triggering system. NOvA-DDT has been developed using the Fermilab artdaq generic DAQ/Event-building toolkit. This toolkit provides the advantages of sharing online software infrastructure with other Intensity Frontier experiments, and of being able to use any offline analysis module, unchanged, as a component of the online triggering decisions. We have measured the performance and overhead of the NOvA-DDT framework using a Hough transform based trigger decision module developed for the NOvA detector to identify cosmic rays. The results of these tests, which were run on the NOvA prototype near detector, yielded a mean processing time of 98 ms per event, while consuming only 1/16th of the available processing capacity. These results provide a proof of concept that a NOvA-DDT based processing system is a viable strategy for data acquisition and triggering for the NOvA far detector.

  2. The Data Acquisition and Calibration System for the ATLAS Semiconductor Tracker

    CERN Document Server

    Abdesselam, A; Barr, A J; Bell, P; Bernabeu, J; Butterworth, J M; Carter, J R; Carter, A A; Charles, E; Clark, A; Colijn, A P; Costa, M J; Dalmau, J M; Demirkoz, B; Dervan, P J; Donega, M; D'Onifrio, M; Escobar, C; Fasching, D; Ferguson, D P S; Ferrari, P; Ferrère, D; Fuster, J; Gallop, B; García, C; González, S; González-Sevilla, S; Goodrick, M J; Gorisek, A; Greenall, A; Grillo, A A; Hessey, N P; Hill, J C; Jackson, J N; Jared, R C; Johannson, P D C; de Jong, P; Joseph, J; Lacasta, C; Lane, J B; Lester, C G; Limper, M; Lindsay, S W; McKay, R L; Magrath, C A; Mangin-Brinet, M; Martí i García, S; Mellado, B; Meyer, W T; Mikulec, B; Minano, M; Mitsou, V A; Moorhead, G; Morrissey, M; Paganis, E; Palmer, M J; Parker, M A; Pernegger, H; Phillips, A; Phillips, P W; Postranecky, M; Robichaud-Véronneau, A; Robinson, D; Roe, S; Sandaker, H; Sciacca, F; Sfyrla, A; Stanecka, E; Stapnes, S; Stradling, A; Tyndel, M; Tricoli, A; Vickey, T; Vossebeld, J H; Warren, M R M; Weidberg, A R; Wells, P S; Wu, S L

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests.

  3. The data acquisition and calibration system for the ATLAS Semiconductor Tracker

    International Nuclear Information System (INIS)

    Abdesselam, A; Barr, A J; Demirkoez, B; Barber, T; Carter, J R; Bell, P; Bernabeu, J; Costa, M J; Escobar, C; Butterworth, J M; Carter, A A; Dalmau, J M; Charles, E; Fasching, D; Ferguson, D P S; Clark, A; Donega, M; D'Onifrio, M; Colijn, A-P; Dervan, P J

    2008-01-01

    The SemiConductor Tracker (SCT) data acquisition (DAQ) system will calibrate, configure, and control the approximately six million front-end channels of the ATLAS silicon strip detector. It will provide a synchronized bunch-crossing clock to the front-end modules, communicate first-level triggers to the front-end chips, and transfer information about hit strips to the ATLAS high-level trigger system. The system has been used extensively for calibration and quality assurance during SCT barrel and endcap assembly and for performance confirmation tests after transport of the barrels and endcaps to CERN. Operating in data-taking mode, the DAQ has recorded nearly twenty million synchronously-triggered events during commissioning tests including almost a million cosmic ray triggered events. In this paper we describe the components of the data acquisition system, discuss its operation in calibration and data-taking modes and present some detector performance results from these tests

  4. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring optimised for presenting large amounts of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...

  5. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  6. ATLAS Detector Interface Group

    CERN Multimedia

    Mapelli, L

    Originally organised as a sub-system in the DAQ/EF-1 Prototype Project, the Detector Interface Group (DIG) was an information exchange channel between the Detector systems and the Data Acquisition, providing critical detector information for prototype design and detector integration. After the reorganisation of the Trigger/DAQ Project and of Technical Coordination, the necessity to provide an adequate context for the integration of detectors with the Trigger and DAQ led to the organisation of the DIG as one of the activities of Technical Coordination. Such an organisation emphasises the ATLAS-wide coordination of the Trigger and DAQ exploitation aspects, which go beyond the domain of the Trigger/DAQ project itself. As part of Technical Coordination, the DIG provides the natural environment for the common work of Trigger/DAQ and detector experts: a DIG forum for a wide discussion of all the detector and Trigger/DAQ integration issues, and a more restricted DIG group for the practical organisation and implementation o...

  7. FELIX - the new detector readout system for the ATLAS experiment

    CERN Document Server

    AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, Jörn; Vandelli, Wainer; Vermeulen, Jos; Wu, Weihao; Zhang, Jinlong

    2016-01-01

    From the ATLAS Phase-I upgrade and onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fiber transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. Replacing traditional point-to-point links between front-end components and the DAQ system by a switched network, FELIX provides scaling, flexibility, uniformity and upgradability. Different front-end data types or different data sources can be routed to different network endpoints that handle that data type or source: e.g. event data, configuration, calibration, detector control, monito...

  8. Production Performance of the ATLAS Semiconductor Tracker Readout System

    CERN Document Server

    Mitsou, V A

    2006-01-01

    The ATLAS Semiconductor Tracker (SCT) together with the pixel and the transition radiation detectors will form the tracking system of the ATLAS experiment at the LHC. It will consist of 20000 single-sided silicon microstrip sensors assembled back-to-back into modules mounted on four concentric barrels and two end-cap detectors formed by nine disks each. The SCT module production and testing have finished, while the macro-assembly is well under way. After an overview of the layout and the operating environment of the SCT, a description of the readout electronics design and operation requirements will be given. The quality control procedure and the DAQ software for assuring the electrical functionality of hybrids and modules will be discussed. The focus will be on the electrical performance results obtained during the assembly and testing of the end-cap SCT modules.

  9. Development of the DAQ System of Triple-GEM Detectors for the CMS Muon Spectrometer Upgrade at LHC

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00387583

    The Gas Electron Multiplier (GEM) upgrade project aims at improving the performance of the muon spectrometer of the Compact Muon Solenoid (CMS) experiment which will suffer from the increase in luminosity of the Large Hadron Collider (LHC). After a long technical stop in 2019-2020, the LHC will restart and run at a luminosity of 2 × 10³⁴ cm⁻² s⁻¹, twice its nominal value. This will in turn increase the rate of particles to which detectors in CMS will be exposed and affect their performance. The muon spectrometer in particular will suffer from a degraded detection efficiency due to the lack of redundancy in its most forward region. To solve this issue, the GEM collaboration proposes to instrument the first muon station with Triple-GEM detectors, a technology which has proven to be resistant to high fluxes of particles. Within the GEM collaboration, the Data Acquisition (DAQ) subgroup is in charge of the development of the electronics and software of the DAQ system of the detectors. This thesis presents th...

  10. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Claus, R.; ATLAS Collaboration

    2016-07-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
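
    The "software waveform feature extraction" mentioned above can be pictured as reducing each sampled pulse to a peak amplitude and a peak time before shipping it out. The C++ sketch below shows that reduction with an invented pulse and sampling period; the real CSC algorithm is considerably more elaborate.

        // Illustrative waveform feature extraction: find the peak sample of a
        // pedestal-subtracted pulse and report its amplitude and time.
        #include <cstddef>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<double> samples = {2, 5, 40, 95, 120, 88, 41, 12, 4, 1};  // ADC counts (invented)
            const double samplePeriodNs = 50.0;                                   // assumed sampling period

            std::size_t peakIndex = 0;
            for (std::size_t i = 1; i < samples.size(); ++i)
                if (samples[i] > samples[peakIndex]) peakIndex = i;

            std::cout << "peak amplitude: " << samples[peakIndex] << " ADC counts, "
                      << "peak time: " << peakIndex * samplePeriodNs << " ns\n";
            // Only these features (plus the channel identity) need to be forwarded
            // to the output link, greatly reducing the data volume per event.
        }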

  11. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Claus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  12. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Science.gov (United States)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.

  13. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    International Nuclear Information System (INIS)

    Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R.T.; Huffer, M.; Kocian, M.; Ruckman, L.; Russell, J.; Su, D.; Wittgen, M.; Iakovidis, G.; Iordanidou, K.; Moschovakos, P.; Ntekas, K.; Kwan, K.; Lankford, A.J.; Nelson, A.; Schernau, M.; Schlenker, S.; Valderanis, C.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2

  14. A new ATLAS muon CSC readout system with system on chip technology on ATCA platform

    Energy Technology Data Exchange (ETDEWEB)

    Claus, R., E-mail: claus@slac.stanford.edu

    2016-07-11

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013–2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf through software waveform feature extraction to output 32 S-links. The full system was installed in Sept. 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.

  15. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)696050; Garelli, N.; Herbst, R.T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A.J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Bartoldus, R.

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambe...

  16. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    ATLAS CSC Collaboration; The ATLAS collaboration

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chamber...

  17. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    AUTHOR|(SzGeCERN)664042

    2016-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thr...

  18. A New ATLAS Muon CSC Readout System with System on Chip Technology on ATCA Platform

    CERN Document Server

    Claus, Richard; The ATLAS collaboration

    2015-01-01

    The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfiguration Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources together with auxiliary memories to form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf thro...

  19. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Senchenko, A

    2012-01-01

    The ATLAS Computing Model embraces the Grid paradigm, with a high degree of decentralization and computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  20. DAQ Architecture for the LHCb Upgrade

    International Nuclear Information System (INIS)

    Liu, Guoming; Neufeld, Niko

    2014-01-01

    LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10³³ cm⁻² s⁻¹. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of ∼32 Tbit/s. The DAQ system will be based on high speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and relative cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ system.
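
    Barrel-shifter traffic shaping can be sketched in a few lines: in time slot s, source i sends its next fragment to destination (i + s) mod N, so every destination receives from exactly one source per slot and the event-building network is never oversubscribed. The C++ sketch below is purely illustrative and uses made-up sizes, not the LHCb implementation.

        // Barrel-shifter scheduling in miniature: print which source sends to
        // which destination in each time slot.
        #include <iostream>

        int main() {
            const int nSources = 4, nDestinations = 4, nSlots = 4;

            for (int slot = 0; slot < nSlots; ++slot) {
                std::cout << "slot " << slot << ":";
                for (int src = 0; src < nSources; ++src) {
                    int dst = (src + slot) % nDestinations;
                    std::cout << "  src" << src << "->dst" << dst;
                }
                std::cout << "\n";
            }
        }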

  1. LHCb; DAQ Architecture for the LHCb Upgrade

    CERN Multimedia

    Neufeld, N

    2013-01-01

    LHCb will have an upgrade of its detector in 2018. After the upgrade, the LHCb experiment will run at a high luminosity of 2 × 10³³ cm⁻² s⁻¹. The upgraded detector will be read out at 40 MHz with a highly flexible software-based triggering strategy. The Data Acquisition (DAQ) system of LHCb reads out the data fragments from the Front-End Electronics and transports them to the High-Level Trigger farm at an aggregate throughput of 32 Tbit/s. The DAQ system will be based on high speed network technologies such as InfiniBand and/or 10/40/100 Gigabit Ethernet. Independent of the network technology, there are different possible architectures for the DAQ system. In this paper, we present our studies on the DAQ architecture, where we analyze size, complexity and (relative) cost. We evaluate and compare several data-flow schemes for a network-based DAQ: push, pull and push with barrel-shifter traffic shaping. We also discuss the requirements and overall implications of the data-flow schemes on the DAQ ...

  2. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Gerry Bauer

    The CMS Storage Manager System The tail-end of the CMS Data Acquisition System is the Storage Manager (SM), which collects output from the HLT and stages the data at Cessy for transfer to its ultimate home in the Tier-0 center. An SM system has been used by CMS for several years with steadily evolving software within the XDAQ framework, but until relatively recently only with provisional hardware. The SM is well known to much of the collaboration through the ‘MiniDAQ’ system, which served as the central DAQ system in 2007 and lives on in 2008 for dedicated sub-detector commissioning. Since March of 2008 a first phase of the final hardware has been commissioned and used in CMS Global Runs. The system originally planned for 2008 aimed at recording ~1 MB events at a few hundred Hz. The building blocks to achieve this are based on Nexsan's SATABeast storage array, a device housing up to 40 disks of 1 TB each and possessing two controllers each capable of almost 200 MB/sec throughput....

  3. A DAQ-Device-Based Continuous Wave Near-Infrared Spectroscopy System for Measuring Human Functional Brain Activity

    Directory of Open Access Journals (Sweden)

    Gang Xu

    2014-01-01

    In the last two decades, functional near-infrared spectroscopy (fNIRS) is getting more and more popular as a neuroimaging technique. The fNIRS instrument can be used to measure the local hemodynamic response, which indirectly reflects the functional neural activities in the human brain. In this study, an easily implemented way to establish a DAQ-device-based fNIRS system was proposed. Basic instrumentation components (light source driving, signal conditioning, sensors, and optical fibers) of the fNIRS system were described. The digital in-phase and quadrature demodulation method was applied in LabVIEW software to distinguish light sources from different emitters. The effectiveness of the custom-made system was verified by simultaneous measurement with a commercial instrument ETG-4000 during a Valsalva maneuver experiment. The light intensity data acquired from the two systems were highly correlated for the lower wavelength (Pearson’s correlation coefficient r = 0.92, P < 0.01) and the higher wavelength (r = 0.84, P < 0.01). Further, a mental arithmetic experiment was implemented to detect neural activation in the prefrontal cortex. For the 9 participants, significant cerebral activation was detected in 6 subjects (P < 0.05) for oxyhemoglobin and in 8 subjects (P < 0.01) for deoxyhemoglobin.
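
    Digital in-phase and quadrature (I/Q) demodulation, used here to separate light sources that share one detector, multiplies the sampled signal by a cosine and a sine at a source's modulation frequency and averages; the source amplitude is then 2*sqrt(I^2 + Q^2). The C++ sketch below demonstrates the principle with invented sampling and modulation frequencies, not those of the described instrument.

        // Minimal digital I/Q demodulation of a single modulated source
        // (illustrative parameters only).
        #include <cmath>
        #include <iostream>

        int main() {
            const double kPi = 3.14159265358979323846;
            const double fs = 10000.0;    // sampling rate in Hz (assumed)
            const double fMod = 1000.0;   // modulation frequency of one source in Hz (assumed)
            const double amplitude = 0.5;
            const int n = 10000;          // one second of samples

            double i = 0.0, q = 0.0;
            for (int k = 0; k < n; ++k) {
                double t = k / fs;
                double sample = amplitude * std::cos(2 * kPi * fMod * t + 0.3);  // detected light
                i += sample * std::cos(2 * kPi * fMod * t);
                q += sample * std::sin(2 * kPi * fMod * t);
            }
            i /= n;
            q /= n;

            std::cout << "recovered amplitude: " << 2.0 * std::sqrt(i * i + q * q)
                      << " (true value " << amplitude << ")\n";
        }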

  4. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  5. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  6. Components for the data acquisition system of the ATLAS testbeams 1996

    International Nuclear Information System (INIS)

    Caprini, M; Niculescu, Michaela

    1997-01-01

    ATLAS is one of the experiments developed at CERN for the Large Hadron Collider. For the sub-detector testbeams a data acquisition system (DAQ) was designed. The Bucharest group is a member of the ATLAS DAQ collaboration and contributed to the development of several components of the testbeam DAQ: read-out modules for standalone and combined test-beams; a readout module for the liquid argon detector; a run control graphical user interface; and a central data recording system. The readout module is able to acquire data event by event from the detector electronics and is based on a Finite State Machine (FSM) incorporating a general scheme for the calibration procedure. The FSM allows detectors to take data either in standalone mode, with local control and recording, or in combined mode together with other sub-detectors, with very easy switching between the two configurations. The readout module for the liquid argon detector is written as a data flow element which takes raw data and creates a formatted event. At the initialization stage the run and detector parameters are read from the Run Control Parameters database. Then the state changes are driven by three interrupt signals (Start of Burst, Trigger, End of Burst) generated by hardware. In calibration mode, at each trigger the event is built (calibration data are taken outside the beam) and then the conditions for the next calibration trigger are prepared (DAQ values, delays, pulsers). The graphical user interface is designed to be used for the control of the data acquisition system. The interface provides a global experiment panel for the activation of and navigation in all the command and display panels. The user can start, stop or change the state of the system, obtain the most important information about the overall system state, and activate other service programs in order to select parameters and databases and to display information about the evolution of the system. Central data recording system lays on the client
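
    The burst-driven behaviour of the readout module can be sketched as a small finite state machine reacting to the three hardware signals (Start of Burst, Trigger, End of Burst). The C++ sketch below only illustrates that pattern; the states, signal sequence and event counting are assumptions, not the original testbeam code.

        // Minimal burst-driven readout state machine (illustrative only).
        #include <iostream>
        #include <vector>

        enum class State { Idle, InBurst };
        enum class Signal { StartOfBurst, Trigger, EndOfBurst };

        int main() {
            State state = State::Idle;
            int eventsInBurst = 0;

            std::vector<Signal> signals = {Signal::StartOfBurst, Signal::Trigger,
                                           Signal::Trigger, Signal::EndOfBurst};
            for (Signal s : signals) {
                switch (state) {
                    case State::Idle:
                        if (s == Signal::StartOfBurst) { state = State::InBurst; eventsInBurst = 0; }
                        break;
                    case State::InBurst:
                        if (s == Signal::Trigger) {
                            ++eventsInBurst;  // read out and build one event
                        } else if (s == Signal::EndOfBurst) {
                            std::cout << "burst ended, " << eventsInBurst << " events built\n";
                            state = State::Idle;  // calibration could run here, outside the beam
                        }
                        break;
                }
            }
        }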

  7. The ATLAS Detector Control System

    CERN Document Server

    Schlenker, S; Kersten, S; Hirschbuehl, D; Braun, H; Poblaguev, A; Oliveira Damazio, D; Talyshev, A; Zimmermann, S; Franz, S; Gutzwiller, O; Hartert, J; Mindur, B; Tsarouchas, CA; Caforio, D; Sbarra, C; Olszowska, J; Hajduk, Z; Banas, E; Wynne, B; Robichaud-Veronneau, A; Nemecek, S; Thompson, PD; Mandic, I; Deliyergiyev, M; Polini, A; Kovalenko, S; Khomutnikov, V; Filimonov, V; Bindi, M; Stanecka, E; Martin, T; Lantzsch, K; Hoffmann, D; Huber, J; Mountricha, E; Santos, HF; Ribeiro, G; Barillari, T; Habring, J; Arabidze, G; Boterenbrood, H; Hart, R; Marques Vinagre, F; Lafarguette, P; Tartarelli, GF; Nagai, K; D'Auria, S; Chekulaev, S; Phillips, P; Ertel, E; Brenner, R; Leontsinis, S; Mitrevski, J; Grassi, V; Karakostas, K; Iakovidis, G.; Marchese, F; Aielli, G

    2011-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of >130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10⁶ operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years an...

  8. Argonne's atlas control system upgrade

    International Nuclear Information System (INIS)

    Munson, F.; Quock, D.; Chapin, B.; Figueroa, J.

    1999-01-01

    The ATLAS facility (Argonne Tandem-Linac Accelerator System) is located at the Argonne National Laboratory. The facility is a tool used in nuclear and atomic physics research, which focuses primarily on heavy-ion physics. The accelerator as well as its control system are evolutionary in nature, and consequently, continue to advance. In 1998 the most recent project to upgrade the ATLAS control system was completed. This paper briefly reviews the upgrade, and summarizes the configuration and features of the resulting control system

  9. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, A; The ATLAS collaboration; Klimentov, A; Oleynik, D; Petrosyan, A

    2014-01-01

    In this paper we describe ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  10. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, A; Di Girolamo, A; Klimentov, A; Oleynik, D; Petrosyan, A

    2013-01-01

    In this paper we describe ATLAS Grid Information System (AGIS), the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  11. Future of DAQ Frameworks and Approaches, and Their Evolution towards the Internet of Things

    Science.gov (United States)

    Neufeld, Niko

    2015-12-01

    Nowadays, a DAQ system is a complex network of processors, sensors and many other active devices. Historically, providing a framework for DAQ has been a very important role of the host institutes of experiments. Reviewing the evolution of such DAQ frameworks is a very interesting subject for the conference. “Internet of Things” is a recent buzzword, but a DAQ framework could be a good example of IoT.

  12. The ATLAS Trigger Core Configuration and Execution System in Light of the ATLAS Upgrade for LHC Run 2

    CERN Document Server

    Heinrich, Lukas; The ATLAS collaboration

    2015-01-01

    During the 2013/14 shutdown of the Large Hadron Collider (LHC) the ATLAS first level trigger (L1T) and the data acquisition system (DAQ) were substantially upgraded to cope with the increase in luminosity and collision multiplicity, expected to be delivered by the LHC in 2015. To name a few, the L1T was extended on the calorimeter side (L1Calo) to better cope with pile-up and apply better-tuned isolation criteria on electron, photon, and jet candidates. The central trigger (CT) was widened to analyze twice as many inputs, provide more trigger lines, and serve multiple sub-detectors in parallel during calibration periods. A new FPGA-based trigger, capable of analyzing event topologies at 40 MHz, was added to provide further input to forming the level 1 trigger decision (L1Topo). On the DAQ side the dataflow was completely remodeled, merging the two previously existing stages of the software-based high level trigger into one. Partially because of these changes, partially because of the new trigger paradigm to h...

  13. DAQ systems for the high energy and nuclotron internal target polarimeters with network access to polarization calculation results and raw data

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2004-01-01

    The on-line data acquisition (DAQ) system for the Nuclotron Internal Target Polarimeter (ITP) at the LHE, JINR, is explained with respect to design and implementation; it is based on the distributed data acquisition and processing system qdpb. Software modules specific to this implementation (dependent on the ITP data contents and hardware layout) are discussed briefly in comparison with those for the High Energy Polarimeter (HEP) at the LHE, JINR. User access methods, both to raw data and to the results of polarization calculations of the ITP and HEP, are discussed.

  14. AGIS: The ATLAS Grid Information System

    CERN Document Server

    Anisenkov, Alexey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing Model embraces the Grid paradigm, with a high degree of decentralization and computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  15. A DAQ system for the experiment of physics based on G-Link

    International Nuclear Information System (INIS)

    Jiang Xiao; Jin Ge

    2007-01-01

    In this paper, a high-speed fiber data transfer system based on G-Link for physics experiments is introduced. The architecture and configuration of the fiber link with the core chips HDMP-1022/1024, the driver circuit of the laser diode and the CIMT coding technology are described. With this high-speed fiber data transfer technology, a 16-channel data acquisition system has been designed and used in a wind tunnel experiment. (authors)

  16. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    Legger, F

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  17. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  18. Build of tri-crosscheck platform for complex HDL design in LHCb's DAQ system

    International Nuclear Information System (INIS)

    Hou Lei; Gong Guanghua; Shao Beibei

    2008-01-01

    TELL1 is the off-detector electronics acquisition readout board for the LHCb experiment. In the development of TELL1, three data stream systems are built to tri-crosscheck the complex VHDL implementation for the FPGAs employed by TELL1. This paper will introduce the tri-crosscheck platform as well as the way they are used in the testing. (authors)

  19. BioDAQ--a simple biosignal acquisition system for didactic use.

    Science.gov (United States)

    Csaky, Z; Mihalas, G I; Focsa, M

    2002-01-01

    A simple, inexpensive device for biosignal acquisition is presented. It mainly meets the requirements for didactic purposes specific to medical informatics laboratory classes. The system has two main types of devices: the 'student unit', the simplest one, used during lessons on real signals, and the 'demo unit', which can also be used in medical practice or for collecting biological signals. It is able to record optical pulse, sphygmogram, ECG (1-4 leads), EEG or EMG (1-4 channels). For didactic purposes it has a wide range of recording options: variable sampling rate, gain and filtering. It can also be used for tele-acquisition via the Internet.

  20. Characterization of a DAQ system for the readout of a SiPM based shashlik calorimeter

    International Nuclear Information System (INIS)

    Berra, A.; Bonvicini, V.; Bosisio, L.; Lietti, D.; Penzo, A.; Prest, M.; Rabaioli, S.; Rashevskaya, I.; Vallazza, E.

    2014-01-01

    Silicon PhotoMultipliers (SiPMs) are a recently developed type of silicon photodetector characterized by high gain and insensitivity to magnetic fields, which makes them suitable detectors for the next generation of high energy and space physics experiments. This paper presents the performance of a readout system for SiPMs based on the MAROC3 ASIC. The ASIC consists of 64 channels working in parallel, each one with a variable gain pre-amplifier, a tunable slow shaper with a sample and hold circuit for the analog readout and a tunable fast shaper for the digital one. In the tests described in this paper, only the analog part of the ASIC has been used. A frontend board based on the MAROC3 ASIC has been tested at CERN coupled to a scintillator-lead shashlik calorimeter, read out with 36 large-area SiPMs. The performance of the system has been characterized in terms of linearity and energy resolution on the CERN PS-T9 and SPS-H2 beamlines, using different configurations of the ASIC parameters.

  1. A PandaRoot interface for binary data in the PANDA prototype DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Fleischer, Soeren; Lange, Soeren; Kuehn, Wolfgang; Hahn, Christopher; Wagner, Milan [2. Physikalisches Institut, Uni Giessen (Germany); Collaboration: PANDA-Collaboration

    2015-07-01

    The PANDA experiment at FAIR will feature a raw data rate of more than 20 MHz. Only a small fraction of these events are of interest. Consequently, a sophisticated online data reduction setup is required, lowering the final output data rate by a factor of roughly 10³ by discarding data which does not fulfil certain criteria. The first stages of the data reduction will be implemented using FPGA-based Compute Nodes. For the planned tests with prototype detectors a small but scalable system is being set up which will allow the concept to be tested in a realistic environment with high rates. In this contribution, we present a PandaRoot implementation of a state-machine-based binary parser which receives detector data from the Compute Nodes via GbE links, converting the data stream into the PandaRoot format for further analysis and mass storage.
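
    A state-machine binary parser of the kind described can be sketched as follows: the parser waits for a start marker, reads a length field, then collects payload bytes until a complete event can be handed to the next stage. The frame format in the C++ sketch below (0xA5 marker, one-byte length) is invented for the illustration and is not the PANDA data format.

        // Illustrative state-machine parser for a simple framed byte stream.
        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        enum class ParserState { WaitMarker, ReadLength, ReadPayload };

        int main() {
            std::vector<std::uint8_t> stream = {0x00, 0xA5, 0x03, 0x11, 0x22, 0x33, 0xA5, 0x01, 0x7F};

            ParserState state = ParserState::WaitMarker;
            std::vector<std::uint8_t> payload;
            std::size_t expected = 0;

            for (std::uint8_t byte : stream) {
                switch (state) {
                    case ParserState::WaitMarker:
                        if (byte == 0xA5) state = ParserState::ReadLength;
                        break;
                    case ParserState::ReadLength:
                        expected = byte;
                        payload.clear();
                        state = expected ? ParserState::ReadPayload : ParserState::WaitMarker;
                        break;
                    case ParserState::ReadPayload:
                        payload.push_back(byte);
                        if (payload.size() == expected) {
                            std::cout << "decoded event with " << payload.size() << " bytes\n";
                            state = ParserState::WaitMarker;  // hand the event to the next stage here
                        }
                        break;
                }
            }
        }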

  2. The ATLAS Detector Control System

    International Nuclear Information System (INIS)

    Lantzsch, K; Braun, H; Hirschbuehl, D; Kersten, S; Arfaoui, S; Franz, S; Gutzwiller, O; Schlenker, S; Tsarouchas, C A; Mindur, B; Hartert, J; Zimmermann, S; Talyshev, A; Oliveira Damazio, D; Poblaguev, A; Martin, T; Thompson, P D; Caforio, D; Sbarra, C; Hoffmann, D

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  3. The ATLAS Detector Control System

    Science.gov (United States)

    Lantzsch, K.; Arfaoui, S.; Franz, S.; Gutzwiller, O.; Schlenker, S.; Tsarouchas, C. A.; Mindur, B.; Hartert, J.; Zimmermann, S.; Talyshev, A.; Oliveira Damazio, D.; Poblaguev, A.; Braun, H.; Hirschbuehl, D.; Kersten, S.; Martin, T.; Thompson, P. D.; Caforio, D.; Sbarra, C.; Hoffmann, D.; Nemecek, S.; Robichaud-Veronneau, A.; Wynne, B.; Banas, E.; Hajduk, Z.; Olszowska, J.; Stanecka, E.; Bindi, M.; Polini, A.; Deliyergiyev, M.; Mandic, I.; Ertel, E.; Marques Vinagre, F.; Ribeiro, G.; Santos, H. F.; Barillari, T.; Habring, J.; Huber, J.; Arabidze, G.; Boterenbrood, H.; Hart, R.; Iakovidis, G.; Karakostas, K.; Leontsinis, S.; Mountricha, E.; Ntekas, K.; Filimonov, V.; Khomutnikov, V.; Kovalenko, S.; Grassi, V.; Mitrevski, J.; Phillips, P.; Chekulaev, S.; D'Auria, S.; Nagai, K.; Tartarelli, G. F.; Aielli, G.; Marchese, F.; Lafarguette, P.; Brenner, R.

    2012-12-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC) at CERN, constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub detectors as well as the common experimental infrastructure are controlled and monitored by the Detector Control System (DCS) using a highly distributed system of 140 server machines running the industrial SCADA product PVSS. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, manage the communication with external systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS data acquisition system. Different databases are used to store the online parameters of the experiment, replicate a subset used for physics reconstruction, and store the configuration parameters of the systems. This contribution describes the computing architecture and software tools to handle this complex and highly interconnected control system.

  4. Flexible DAQ card for detector systems utilizing the CoaXPress communication standard

    International Nuclear Information System (INIS)

    Neue, G.; Hejtmánek, M.; Marčišovský, M.; Voleš, P.

    2015-01-01

    This work concerns the design and construction of a flexible FPGA-based data acquisition system aimed at particle detectors. The interface card as presented was designed for large area detectors with millions of individual readout channels. Flexibility was achieved by partitioning the design into multiple PCBs, creating a set of modular blocks, allowing the creation of a wide variety of configurations by simply stacking functional PCBs together. This way the user can easily toggle the polarity of the high voltage bias supply or switch the downstream interface from CoaXPress to PCIe or stream directly to HDMI. We addressed the issues of data throughput, data buffering, bias voltage generation, trigger timing and fine tuning of the whole readout chain, enabling smooth data transmission. On the current prototype, we have wire-bonded a MediPix2 MXR quad and connected it to a XILINX FPGA. For the downstream interface, we implemented the CoaXPress communication protocol, which enables us to stream data at 3.125 Gbps to a standard PC.

  5. PCI Based Read-out Receiver Card in the ALICE DAQ System

    CERN Document Server

    Carena, W; Dénes, E; Divià, R; Schossmaier, K; Soós, C; Sulyán, J; Vascotto, Alessandro; Van de Vyvre, P

    2001-01-01

    The Detector Data Link (DDL) is the high-speed optical link for the ALICE experiment. This link shall transfer the data coming from the detectors at a rate of 100 MB/s. The main components of the link have been developed: the Destination Interface Unit (DIU), the Source Interface Unit (SIU) and the Read-out Receiver Card (RORC). The first RORC version is based on the VME bus. The performance tests show that the maximum VME bandwidth could be reached. Meanwhile the PCI bus became very popular and is used in many platforms. The development of a PCI-based version has been started. The document describes the prototype version in three sections. An overview explains the main purpose of the card: to provide an interface between the DDL and the PCI bus. Acting as a 32-bit/33 MHz PCI master, the card is able to write or read directly to or from the system memory from or to the DDL, respectively. Besides these functions, the card can also be used as an autonomous data generator. The card has been designed to be well adapted to ...

  6. AGIS: The ATLAS Grid Information System

    OpenAIRE

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization and computing resources able to meet ATLAS requirements of petabytes scale data operations. In this paper we present ATLAS Grid Information System (AGIS) designed to integrate configurat...

  7. Upgrading the ATLAS control system

    International Nuclear Information System (INIS)

    Munson, F.H.; Ferraretto, M.

    1993-01-01

    Heavy-ion accelerators are tools used in the research of nuclear and atomic physics. The ATLAS facility at the Argonne National Laboratory is one such tool. The ATLAS control system serves as the primary operator interface to the accelerator. A project to upgrade the control system is presently in progress. Since this is an upgrade project and not a new installation, it was imperative that the development work proceed without interfering with normal operations. An additional criterion for the development work was that the writing of additional ''in-house'' software should be kept to a minimum. This paper briefly describes the control system being upgraded, and explains some of the reasons for the decision to upgrade the control system. Design considerations and goals for the new system are described, and the present status of the upgrade is discussed.

  8. The ATLAS Production System Evolution

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration

    2017-01-01

    The second generation of the ATLAS Production System called ProdSys2 is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based upon many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization are among the major features of the system. The Production System has a sophisticated job fault recovery mechanism, which efficiently allows running multi-terabyte tasks without human intervention. We have implemented new features which allow automatic task submission and chaining of differe...

  9. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  10. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  11. DATA ACQUISITION (DAQ)

    CERN Multimedia

    Attila Racz

    DAQ/On-Line Computing installation status After the installation and commissioning of the DAQ underground elements in 2006 and the first months of 2007, all the efforts are now directed to the installation and commissioning of the On-Line Computing farm (OLC) located on the first floor of SCX5 building at the CMS experimental site. In summer 2007, 640 Readout Unit servers (RUs) have been installed and commissioned along with 160 servers providing general services for the users (DCS, database, RCMS, data storage, etc). Since the global run of November 2007, the event fragments are assembled and processed by the OLC. Thanks to the flexibility of the trapezoidal event builder, some RUs are acting as Filter Units (FUs) and hence provide the full processing chain with a single type of server. With this temporary configuration, all FEDs can be readout at a few kHz. Since the March 08 global run, events are stored on the storage manager SAN in the OLC, and subsequently transferred over the dedicated CDR link (2 x...

  12. A system for managing information at ATLAS

    International Nuclear Information System (INIS)

    Tilbrook, I.R.

    1993-01-01

    In response to a need for better management of maintenance and document information at the Argonne Tandem-Linear Accelerating System (ATLAS), the ATLAS Information Management System (AIMS) has been created. The system is based on the relational database model. The system's applications use the Alpha-4 relational database management system, a commercially available software package. The system's function and design are described

  13. AGIS: The ATLAS Grid Information System

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produced petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization and computing resources able to meet ATLAS requirements of petabytes scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  14. The ATLAS detector control system

    International Nuclear Information System (INIS)

    Schlenker, S.; Arfaoui, S.; Franz, S.

    2012-01-01

    The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of more than 130 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher level control system layers allow for automatic control procedures, efficient error recognition and handling, and manage the communication with external systems such as the LHC. First, this contribution describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high luminosity upgrades are outlined. (authors)

  15. Overview of DAQ developments for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Emschermann, David [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Compressed Baryonic Matter experiment (CBM) at the future Facility for Antiproton and Ion Research (FAIR) is a fixed-target setup operating at very high interaction rates of up to 10 MHz. The high rate capability can be achieved with fast and radiation hard detectors equipped with free-streaming readout electronics. A high-speed data acquisition (DAQ) system will forward data volumes of up to 1 TB/s from the CBM cave to the first level event selector (FLES), located 400 m away. This presentation showcases recent developments of DAQ components for CBM. We highlight the anticipated DAQ setup for beam tests scheduled for the end of 2015.

  16. Evolution of the ReadOut System of the ATLAS experiment

    CERN Document Server

    Borga, A; The ATLAS collaboration; Green, B; Kugel, A; Joos, M; Panduro Vazquez, W; Schumacher, J; Teixeira-Dias, P; Tremblet, L; Vandelli, W; Vermeulen, J; Werner, P; Wickens, F

    2014-01-01

    The ReadOut System (ROS) is a central and essential part of the ATLAS DAQ system. It receives and buffers data of events accepted by the first-level trigger from all subdetectors and first-level trigger subsystems. Event data are subsequently forwarded to the High-Level Trigger system and Event Builder via a 1 GbE-based network. The ATLAS ROS is being completely renewed in view of the demanding conditions expected during LHC Run 2 and Run 3: obsolete technologies are being replaced, and space constraints require it to be compact. The new ROS will consist of roughly 100 Linux-based 2U high rack mounted server PCs, each equipped with two PCIe I/O cards and four 10 GbE interfaces. The FPGA-based PCIe I/O cards, developed by the ALICE collaboration, will be configured with ATLAS-specific firmware, the so-called RobinNP firmware. They will provide the connectivity to about 2000 optical point-to-point links conveying the ATLAS event data. This dense configuration provides an excellent test bench for studying I/O efficiency and ...

  17. Core component integration tests for the back-end software sub-system in the ATLAS data acquisition and event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    2000-01-01

    The ATLAS data acquisition (DAQ) and Event Filter (EF) prototype -1 project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system on the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the DAQ. The back-end sub-system includes core components and detector integration components. The core components provide the basic functionality and had priority in terms of time-scale for development in order to have a baseline sub-system that can be used for integration with the data-flow sub-system and event filter. The following components are considered to be the core of the back-end sub-system: - Configuration databases, describe a large number of parameters of the DAQ system architecture, hardware and software components, running modes and status; - Message reporting system (MRS), allows all software components to report messages to other components in the distributed environment; - Information service (IS) allows the information exchange for software components; - Process manager (PMG), performs basic job control of software components (start, stop, monitoring the status); - Run control (RC), controls the data taking activities by coordinating the operations of the DAQ sub-systems, back-end software and external systems. Performance and scalability tests have been made for individual components. The back-end subsystem integration tests bring together all the core components and several trigger/DAQ/detector integration components to simulate the control and configuration of data taking sessions. For back-end integration tests a test plan was provided. The tests have been done using a shell script that goes through different phases as follows: - starting the back-end server processes to initialize communication services and PMG; - launching configuration specific processes via DAQ supervisor as
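
    A minimal sketch of how such a phased integration test could be driven is shown below, assuming a simple linear sequence of phases like the one quoted in the abstract; the phase names follow the abstract, while the commands (plain echo placeholders) and the Python driver itself are purely illustrative and are not the actual ATLAS test scripts.

      # Illustrative sketch of a phased integration-test driver.  The phase
      # names follow the abstract above; the commands launched for each phase
      # are placeholders, not the real DAQ tools.
      import subprocess

      PHASES = [
          ("start back-end servers",      ["echo", "start IS/MRS servers and PMG agents"]),
          ("launch configured processes", ["echo", "supervisor launches processes from the config DB"]),
          ("run control cycle",           ["echo", "boot -> configure -> run -> stop"]),
          ("shutdown",                    ["echo", "terminate processes and servers"]),
      ]

      def run_phase(name, cmd):
          """Run one phase and abort the sequence if it fails."""
          print(f"== phase: {name}")
          result = subprocess.run(cmd, capture_output=True, text=True)
          if result.returncode != 0:
              raise RuntimeError(f"phase '{name}' failed: {result.stderr.strip()}")
          print(result.stdout.strip())

      for name, cmd in PHASES:
          run_phase(name, cmd)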

  18. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    Grael, F F; Maidantchik, C; Évora, L H R A; Karam, K; Moraes, L O F; Cirilli, M; Nessi, M; Pommès, K

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, authors' lists, preparation and publication of papers and speaker nominations. Previously, most of the information was accessible by a limited group and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the long lifetime of the experiment and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.
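
    The following minimal Python sketch illustrates the kind of technology-isolating access layer described above: callers retrieve, insert and update through one facade regardless of the underlying database. SQLite, the table name and the method names are assumptions made for this example; they are not the Glance implementation.

      # Illustrative sketch of an access layer that hides the concrete database
      # technology behind one interface.  Backend choice and table names are
      # assumptions for this example only.
      import sqlite3

      class Database:
          """Uniform retrieve/insert/update facade over a DB-API connection."""

          def __init__(self, connection):
              self.conn = connection

          def retrieve(self, table, where="1=1", params=()):
              cur = self.conn.execute(f"SELECT * FROM {table} WHERE {where}", params)
              return cur.fetchall()

          def insert(self, table, row: dict):
              cols = ", ".join(row)
              marks = ", ".join("?" for _ in row)
              self.conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                                tuple(row.values()))
              self.conn.commit()

          def update(self, table, values: dict, where, params=()):
              assign = ", ".join(f"{c}=?" for c in values)
              self.conn.execute(f"UPDATE {table} SET {assign} WHERE {where}",
                                tuple(values.values()) + tuple(params))
              self.conn.commit()

      # Usage: the caller never deals with backend-specific drivers directly.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE members (name TEXT, institute TEXT)")
      db = Database(conn)
      db.insert("members", {"name": "A. Physicist", "institute": "CERN"})
      print(db.retrieve("members", "institute=?", ("CERN",)))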

  19. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contribution, employment records, members' appointments, authors' lists, preparation and publication of papers and speaker nominations. Previously, most of the information was accessible by a limited group and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task due to the long lifetime of the experiment and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support the ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access for each member and system.

  20. The HLT, DAQ and DCS TDR

    CERN Multimedia

    Wickens, F. J

    At the end of June the Trigger-DAQ community achieved a major milestone with the submission to the LHCC of the Technical Design Report (TDR) for DAQ, HLT and DCS. The first unbound copies were handed to the LHCC referees on the scheduled date of 30th June, this was followed a few days later by a limited print run which produced the first bound copies (see Figure 1). As had previously been announced both to the LHCC and the ATLAS Collaboration it was not possible on this timescale to give a complete validation of all of the aspects of the architecture in the TDR. So it had been agreed that further work would continue over the summer to provide more complete results for the formal review by the LHCC of the TDR in September. Thus there followed an intense programme of measurements and analysis: especially to provide results for HLT both in testbeds and for the event selection software itself; to provide additional information on scaling of the dataflow aspects; to provide first results on the new prototype ROBin...

  1. Research and development of common DAQ platform

    International Nuclear Information System (INIS)

    Higuchi, T.; Igarashi, Y.; Nakao, M.; Suzuki, S.Y.; Tanaka, M.; Nagasaka, Y.; Varner, G.

    2003-01-01

    The upgrade of the KEKB accelerator toward L=10^35 cm^-2 s^-1 requires an upgrade of the Belle data acquisition system. To match the market trend, we develop a DAQ platform based on the PCI bus that enables the fastest possible DAQ with a longer system lifetime. The platform is a VME-9U motherboard comprising four slots for signal digitization modules and three PMC slots to house CPUs for data compression. The platform is equipped with event FIFOs for data buffering to minimize the dead-time. A trigger module residing on a VME-6U size rear board is connected to the 9U board via a PCI-PCI bridge to generate an interrupt for the CPU upon the level-1 trigger. (author)

  2. The ATLAS Detector Safety System

    CERN Multimedia

    Helfried Burckhart; Kathy Pommes; Heidi Sandaker

    The ATLAS Detector Safety System (DSS) has the mandate to put the detector in a safe state in case an abnormal situation arises which could be potentially dangerous for the detector. It covers the CERN alarm severity levels 1 and 2, which address serious risks for the equipment. The highest level 3, which also includes danger for persons, is the responsibility of the CERN-wide system CSAM, which always triggers an intervention by the CERN fire brigade. DSS works independently from and hence complements the Detector Control System, which is the tool to operate the experiment. The DSS is organized in a Front-End (FE), which autonomously fulfills the safety functions, and a Back-End (BE) for interaction and configuration. The overall layout is shown in the accompanying figure, 'ATLAS DSS configuration'. The FE implementation is based on a redundant Programmable Logic Controller (PLC) system, which is also used in industry for such safety applications. Each of the two PLCs alone, one located underground and one at the s...

  3. On-chamber readout system for the ATLAS MDT Muon Spectrometer

    CERN Document Server

    Chapman, J; Ball, R; Brandenburg, G; Hazen, E; Oliver, J; Posch, C

    2004-01-01

    The ATLAS MDT Muon Spectrometer is a system of approximately 380,000 pressurized cylindrical drift tubes of 3 cm diameter and up to 6 meters in length. These Monitored Drift Tubes (MDTs) are precision-glued to form super-layers, which in turn are assembled into precision chambers of up to 432 tubes each. Each chamber is equipped with a set of mezzanine cards containing analog and digital readout circuitry sufficient to read out 24 MDTs per card. Up to 18 of these cards are connected to an on-chamber DAQ element referred to as a Chamber Service Module, or CSM. The CSM multiplexes data from the mezzanine cards and outputs this data on an optical fiber which is received by the off-chamber DAQ system. Thus, the chamber forms a highly self-contained unit with DC power in and a single optical fiber out. The Monitored Drift Tubes, due to their length, require a terminating resistor at their far end to prevent reflections. The readout system has been designed so that thermal noise from this resistor remains the domi...

  4. artdaq: DAQ software development made simple

    Science.gov (United States)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. As art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support for an alternate mode of running whereby data from some subdetector components are only streamed if requested has been added; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment comes new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.
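
    The following minimal Python sketch illustrates the idea of pluggable metric-reporting backends selected by configuration, as described above; the class names and the plain-file output format are assumptions for this example and not the artdaq API (the Graphite backend simply follows Graphite's public plaintext protocol).

      # Illustrative sketch of a pluggable metric-reporting interface: the set
      # of output backends is chosen at configuration time.  Class names and
      # formats are assumptions for this example.
      import socket
      import time

      class MetricBackend:
          def send(self, name: str, value: float, timestamp: float): ...

      class FileBackend(MetricBackend):
          """Append one 'timestamp name value' line per metric to a flat file."""
          def __init__(self, path):
              self.path = path
          def send(self, name, value, timestamp):
              with open(self.path, "a") as f:
                  f.write(f"{timestamp:.0f} {name} {value}\n")

      class GraphiteBackend(MetricBackend):
          """Graphite's plaintext protocol: '<name> <value> <unix time>\n'."""
          def __init__(self, host="localhost", port=2003):
              self.addr = (host, port)
          def send(self, name, value, timestamp):
              with socket.create_connection(self.addr, timeout=1.0) as s:
                  s.sendall(f"{name} {value} {timestamp:.0f}\n".encode())

      class MetricManager:
          def __init__(self, backends):
              self.backends = backends
          def report(self, name, value):
              now = time.time()
              for backend in self.backends:
                  backend.send(name, value, now)

      # Only the file backend is used here so the example runs standalone.
      manager = MetricManager([FileBackend("daq_metrics.txt")])
      manager.report("eventbuilder.rate_hz", 1250.0)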

  5. BTeV trigger/DAQ innovations

    International Nuclear Information System (INIS)

    Votava, Margaret

    2005-01-01

    The BTeV experiment was a collider based high energy physics (HEP) B-physics experiment proposed at Fermilab. It included a large-scale, high speed trigger/data acquisition (DAQ) system, reading data off the detector at 500 Gbytes/sec and writing to mass storage at 200 Mbytes/sec. The online design was considered to be highly credible in terms of technical feasibility, schedule and cost. This paper will give an overview of the overall trigger/DAQ architecture, highlight some of the challenges, and describe the BTeV approach to solving some of the technical challenges. At the time of termination in early 2005, the experiment had just passed its baseline review. Although not fully implemented, many of the architecture choices, design, and prototype work for the online system (both trigger and DAQ) were well on their way to completion. Other large, high-speed online systems may have interest in the some of the design choices and directions of BTeV, including (a) a commodity-based tracking trigger running asynchronously at full rate, (b) the hierarchical control and fault tolerance in a large real time environment, (c) a partitioning model that supports offline processing on the online farms during idle periods with plans for dynamic load balancing, and (d) an independent parallel highway architecture

  6. The upgrade of the ATLAS High Level Trigger and Data Acquisition systems and their integration

    CERN Document Server

    Abreu, R; The ATLAS collaboration

    2014-01-01

    The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously dispersed functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Flexibility under different running conditions is improved by an automatic balance of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is ac...
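
    The caching idea behind the incremental data collection described above can be illustrated with a minimal Python sketch: fragments already fetched are reused, and requests for missing fragments are packed into a single trip. The class and function names are assumptions for this example, not the ATLAS HLT code.

      # Illustrative sketch: fragments already fetched from the Read-Out System
      # are cached and reused, and outstanding requests are grouped into one
      # packed trip.  fetch_from_ros() stands in for the real network request.
      class EventDataCollector:
          def __init__(self, fetch_from_ros):
              self.fetch = fetch_from_ros      # callable: set of ROB ids -> {id: fragment}
              self.cache = {}                  # fragments already retrieved for this event

          def get_fragments(self, rob_ids):
              """Return fragments for rob_ids, asking the ROS only for missing ones."""
              missing = {r for r in rob_ids if r not in self.cache}
              if missing:
                  # one packed request instead of one request per ROB
                  self.cache.update(self.fetch(missing))
              return {r: self.cache[r] for r in rob_ids}

          def clear(self):
              """Drop cached data once the event is rejected or fully built."""
              self.cache.clear()

      # Example with a dummy ROS that records every request it receives.
      requests = []
      def dummy_ros(ids):
          requests.append(set(ids))
          return {i: f"data-{i}" for i in ids}

      collector = EventDataCollector(dummy_ros)
      collector.get_fragments({1, 2, 3})   # first selection step
      collector.get_fragments({2, 3, 4})   # later step reuses fragments 2 and 3
      print(requests)                       # [{1, 2, 3}, {4}]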

  7. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    Verlaat, Bartholomeus; The ATLAS collaboration

    2016-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower temperature operation (< -35°C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose of up to 550 fb^-1 integrated luminosity. This paper describes the design, development, construction and commissioning of the IBL CO2 cooling system. It describes the challenges overcome and the important lessons learned for the development of future systems which are now under design for the Phase-II upgrade detectors.

  8. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation in the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of the many-tasks workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required using modern computing technologies and approaches. We report technical details of this development: database implementation, server logic and Web user interface technologies.

  9. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration which is getting closer to the final size. Large scale and performance tests of the integrated system were performed on this setup with emphasis on investigating the aspects of the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. The authors present a brief overview of the online system structure, its components and the large scale integration tests and their results
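
    A minimal Python sketch of a run-control hierarchy in which a state transition issued at the top is propagated to child controllers, in the spirit of the run control tests described above; the state and command names are generic DAQ choices made for this example rather than the actual ATLAS run-control state model.

      # Illustrative sketch of a run-control hierarchy: a command issued at the
      # root is propagated to the children before the parent changes state.
      # The state model below is an assumption for this example.
      TRANSITIONS = {
          ("initial", "boot"): "configured",
          ("configured", "start"): "running",
          ("running", "stop"): "configured",
          ("configured", "shutdown"): "initial",
      }

      class Controller:
          def __init__(self, name, children=()):
              self.name = name
              self.state = "initial"
              self.children = list(children)

          def command(self, cmd):
              # children change state first, then the parent follows
              for child in self.children:
                  child.command(cmd)
              key = (self.state, cmd)
              if key not in TRANSITIONS:
                  raise RuntimeError(f"{self.name}: '{cmd}' not allowed in state '{self.state}'")
              self.state = TRANSITIONS[key]
              print(f"{self.name}: -> {self.state}")

      root = Controller("root", [Controller("detector_A"), Controller("detector_B")])
      for cmd in ("boot", "start", "stop", "shutdown"):
          root.command(cmd)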

  10. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  11. The ATLAS IBL CO2 Cooling System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237783; The ATLAS collaboration; Zwalinski, L.; Bortolin, C.; Vogt, S.; Godlewski, J.; Crespo-Lopez, O.; Van Overbeek, M.; Blaszcyk, T.

    2017-01-01

    The ATLAS Pixel detector has been equipped with an extra B-layer in the space obtained by a reduced beam pipe. This new pixel detector, called the ATLAS Insertable B-Layer (IBL), was installed in 2014 and is operational in the current ATLAS data taking. The IBL detector is cooled with evaporative CO2 and is the first of its kind in ATLAS. The ATLAS IBL CO2 cooling system is designed for lower temperature operation (< -35°C) than the previously developed CO2 cooling systems in High Energy Physics experiments. The cold temperatures are required to protect the pixel sensors against the high expected radiation dose of up to 550 fb^-1 integrated luminosity.

  12. Development of an ADC Radiation Tolerance Characterization System for the Upgrade of the ATLAS LAr Calorimeter

    CERN Document Server

    INSPIRE-00445642; Chen, Kai; Kierstead, James; Lanni, Francesco; Takai, Helio; Jin, Ge

    2016-01-01

    The ATLAS LAr calorimeter will perform its Phase-I upgrade during the long shutdown (LS2) in 2018, when a new LAr Trigger Digitizer Board (LTDB) will be designed and installed. Several commercial-off-the-shelf (COTS) multichannel high-speed ADCs have been selected as possible backups of the radiation tolerant ADC ASICs for the LTDB. In order to evaluate the radiation tolerance of these backup commercial ADCs, we developed an ADC radiation tolerance characterization system, which includes the ADC boards, data acquisition (DAQ) board, signal generator, external power supplies and a host computer. The ADC board is custom designed for the different ADCs and has ADC driver and clock distribution circuits integrated on board. The Xilinx ZC706 FPGA development board is used as the DAQ board. The data from the ADC are routed to the FPGA through the FMC (FPGA Mezzanine Card) connector, de-serialized and monitored by the FPGA, and then transmitted to the host computer through Gigabit Ethernet. A software program has been developed wit...

  13. The ATLAS distributed analysis system

    OpenAIRE

    Legger, F.

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During...

  14. Test Management Framework for the ATLAS Experiment

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration; Avolio, Giuseppe

    2018-01-01

    The Data Acquisition (DAQ) of the ATLAS experiment is a large distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. Advanced diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation and fast recovery in case of problems and, ultimately, to the high efficiency of the whole experiment. The base layer of the verification and diagnostic functionality is a test management framework. We have developed a flexible test management system that allows the experts to define and configure tests for different components, indicate follow-up actions to test failures and describe inter-dependencies between DAQ or detector elements. This development is based on the experience gained with the previous test system that was used during the first three years of th...
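
    The following minimal Python sketch illustrates a test registry in which each test carries its dependencies and a follow-up action for failures, as outlined above; the test names, component names and data layout are invented for this example and do not describe the actual TDAQ framework.

      # Illustrative sketch of a test registry with per-test dependencies and a
      # follow-up action on failure.  All names are invented for this example.
      from dataclasses import dataclass, field
      from typing import Callable, List

      @dataclass
      class Test:
          name: str
          component: str
          check: Callable[[], bool]
          depends_on: List[str] = field(default_factory=list)
          on_failure: str = "report to expert"

      class TestManager:
          def __init__(self):
              self.tests = {}

          def register(self, test: Test):
              self.tests[test.name] = test

          def run(self, name, done=None):
              """Run a test after its dependencies; return True on success."""
              done = {} if done is None else done
              if name in done:
                  return done[name]
              test = self.tests[name]
              if not all(self.run(dep, done) for dep in test.depends_on):
                  done[name] = False
                  return False
              ok = test.check()
              if not ok:
                  print(f"{test.name} FAILED on {test.component}: {test.on_failure}")
              done[name] = ok
              return ok

      mgr = TestManager()
      mgr.register(Test("ping_node", "ros-pc-01", check=lambda: True))
      mgr.register(Test("read_fragment", "ros-pc-01", check=lambda: False,
                        depends_on=["ping_node"],
                        on_failure="restart readout application"))
      mgr.run("read_fragment")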

  15. Contributions to dataflow sub-system of the ATLAS data acquisition and event filter prototype-1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of 4 sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is a member of the DAQ/EF collaboration and during 1997 it was involved in the Dataflow activities. The Dataflow component of the ATLAS DAQ/EF prototype is responsible for moving the event data from the detector read-out links to the final mass storage. It also provides event data for monitoring purposes and implements local control for the various elements. The Dataflow system is designed to cover three main functions, namely: the collection and buffering of the data from the detector, the merging of fragments into full events and the interaction with the event filter sub-farm. The event building function is covered by a Dataflow building block named Event Builder. All the other functions of the Dataflow system are covered by the two modular building blocks, the read-out crate (ROC) and the sub-farm DAQ (SFC). The Bucharest group was mainly involved in the activities related to the high level design, initial implementation and tests of the ROC supporting the read-out from one or more read-out drivers and having one or more connections to the event builder. The main data flow within the ROC is handled by three input/output modules named IOMs: the trigger module (TRG), the event builder interface module (EBIF) and the read-out buffer module (ROB). The TRG receives and buffers data control messages from the level 1 and level 2 trigger systems, the EBIF builds fragments and makes them available to the event building sub-system and the ROB receives and buffers ROB fragments from the read-out link, S-LINK. In order to estimate the performance which could be achieved with the actual
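
    The division of work between the three ROC input/output modules described above can be illustrated with a minimal Python sketch: the ROB buffers fragments from the read-out link, the TRG queues trigger decisions, and the EBIF assembles the buffered fragments for the event builder. Data formats and method names are assumptions made for this example only.

      # Illustrative sketch of the three ROC input/output modules cooperating.
      # The fragment format and method names are assumptions for this example.
      from collections import defaultdict

      class ROB:
          """Read-out buffer: stores fragments from the read-out link per event."""
          def __init__(self):
              self.fragments = defaultdict(list)
          def receive(self, event_id, fragment):
              self.fragments[event_id].append(fragment)
          def fetch(self, event_id):
              return self.fragments.pop(event_id, [])

      class TRG:
          """Trigger module: queues trigger decisions for the crate."""
          def __init__(self):
              self.accepted = []
          def accept(self, event_id):
              self.accepted.append(event_id)

      class EBIF:
          """Event-builder interface: builds crate fragments for accepted events."""
          def __init__(self, trg, robs):
              self.trg, self.robs = trg, robs
          def build(self):
              for event_id in self.trg.accepted:
                  yield event_id, [f for rob in self.robs for f in rob.fetch(event_id)]
              self.trg.accepted.clear()

      rob, trg = ROB(), TRG()
      ebif = EBIF(trg, [rob])
      rob.receive(42, b"\x01\x02")
      trg.accept(42)
      print(list(ebif.build()))    # [(42, [b'\x01\x02'])]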

  16. LHCb DAQ network upgrade tests

    CERN Document Server

    Pisani, Flavio

    2013-01-01

    My project concerned the evaluation of new technologies for the DAQ network upgrade of LHCb. The first part consisted of developing an OpenFlow-based Clos network. This new technology is very interesting and powerful but, as shown by the results, it still needs further improvements. The second part consisted of testing and benchmarking 40GbE network equipment: Mellanox MT27500, Chelsio T580 and Huawei Cloud Engine 12804. An event-building simulation is currently being performed in order to check the feasibility of the DAQ network upgrade in LS2. The first results are promising.

  17. Contributions to the back-end software sub-system of the ATLAS data acquisition of event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    1998-01-01

    A project has been approved by the ATLAS Collaboration for the design and implementation of a Data Acquisition (DAQ) and Event Filter (EF) prototype, based on the functional architecture described in the ATLAS Technical Proposal. The prototype consists of a full 'vertical' slice of the ATLAS Data Acquisition and Event Filter architecture and can be seen as made of 4 sub-systems: the Detector Interface, the Dataflow, the Back-end DAQ and the Event Filter. The Bucharest group is a member of the DAQ/EF collaboration and during 1997 was involved in the Back-end activities. The back-end software encompasses the software for configuring, controlling and monitoring the DAQ but specifically excludes the management, processing or transportation of physics data. The user requirements gathered for the back-end sub-system have been divided into groups related to activities providing similar functionality. The groups have been further developed into components of the Back-end with a well defined purpose and boundaries. Each component offers some unique functionality and has its own architecture. The actual Back-end component model includes 5 core components (run control, configuration databases, message reporting system, process manager and information service) and 6 detector integration components (partition and resource manager, status display, run bookkeeper, event dump, test manager and diagnostic package). The Bucharest group participated in the high level design, implementation and testing of three components (information service, message reporting system and status display). The Information Service (IS) provides an information exchange facility for software components of the DAQ. Information (defined by the supplier) from many sources can be categorized and made available to requesting applications asynchronously or on demand. The design of the information service followed an object oriented approach. It is a multiple server configuration in which servers are dedicated to
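
    A minimal Python sketch of the publish/subscribe behaviour attributed to the Information Service above: suppliers publish named information objects, and readers either poll on demand or register a callback for asynchronous updates. The method names and callback interface are assumptions for this example, not the actual IS API.

      # Illustrative sketch of an information-exchange service: suppliers
      # publish named values, readers poll on demand or subscribe for updates.
      class InformationService:
          def __init__(self):
              self.values = {}          # name -> latest published value
              self.subscribers = {}     # name -> list of callbacks

          def publish(self, name, value):
              self.values[name] = value
              for callback in self.subscribers.get(name, []):
                  callback(name, value)          # asynchronous-style notification

          def read(self, name):
              return self.values[name]           # on-demand access

          def subscribe(self, name, callback):
              self.subscribers.setdefault(name, []).append(callback)

      info = InformationService()
      info.subscribe("DAQ.run_number", lambda n, v: print(f"update: {n} = {v}"))
      info.publish("DAQ.run_number", 12345)      # triggers the callback
      print(info.read("DAQ.run_number"))         # on-demand read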

  18. ATLAS Magnet System Nearing Completion

    CERN Document Server

    ten Kate, H H J

    2008-01-01

    The ATLAS Detector at the Large Hadron Collider at CERN is equipped with a superconducting magnet system that consists of a Barrel Toroid, two End-Cap Toroids and a Central Solenoid. The four magnets generate the magnetic field for the muon- and inner tracking detectors, respectively. After 10 years of construction in industry, integration and on-surface tests at CERN, the magnets are now in the underground cavern where they undergo the ultimate test before data taking in the detector can start during the course of next year. The system with outer dimensions of 25 m length and 22 m diameter is based on using conduction cooled aluminum stabilized NbTi conductors operating at 4.6 K and 20.5 kA maximum coil current with peak magnetic fields in the windings of 4.1 T and a system stored magnetic energy of 1.6 GJ. The Barrel Toroid and Central Solenoid were already successfully charged after installation to full current in autumn 2006. This year the system is completed with two End Cap Toroids. The ultimate test of...

  19. The ATLAS Fast Tracker system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00353645; The ATLAS collaboration

    2017-01-01

    From 2010 to 2012 the Large Hadron Collider (LHC) operated at a centre-of-mass energy of 7 TeV and 8 TeV, colliding bunches of particles every 50 ns. During operation, the ATLAS trigger system has performed efficiently contributing to important results, including the discovery of the Higgs boson in 2012. The LHC restarted in 2015 and will operate for four years at a center of mass energy of 13 TeV and bunch crossing of 50 ns and 25 ns. These running conditions result in the mean number of overlapping proton-proton interactions per bunch crossing increasing from 20 to 60. The Fast Tracker (FTK) system is designed to deliver full event track reconstruction for all tracks with transverse momentum above 1 GeV at a Level-1 rate of 100 kHz with an average latency below 100 microseconds. This will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. To achieve this goal the system uses a parallel ...

  20. The ALICE DAQ infoLogger

    Science.gov (United States)

    Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Dénes, E.; Divià, R.; Fuchs, U.; Grigore, A.; Ionita, C.; Delort, C.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Von Haller, B.; Alice Collaboration

    2014-04-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion experiment studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The ALICE DAQ (Data Acquisition System) is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches). The DAQ reads the data transferred from the detectors through 500 dedicated optical links at an aggregated and sustained rate of up to 10 Gigabytes per second and stores at up to 2.5 Gigabytes per second. The infoLogger is the log system which collects centrally the messages issued by the thousands of processes running on the DAQ machines. It allows errors to be reported on the fly and keeps a trace of runtime execution for later investigation. More than 500000 messages are stored every day in a MySQL database, in a structured table keeping track for each message of 16 indexing fields (e.g. time, host, user, ...). The total amount of logs for 2012 exceeds 75GB of data and 150 million rows. We present in this paper the architecture and implementation of this distributed logging system, consisting of a client programming API, local data collector processes, a central server, and interactive human interfaces. We review the operational experience during the 2012 run, in particular the actions taken to ensure shifters receive manageable and relevant content from the main log stream. Finally, we present the performance of this log system, and future evolutions.
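
    The following minimal sketch illustrates a structured, queryable message store of the kind described above, with SQLite standing in for MySQL so the example is self-contained; only a handful of the 16 indexing fields are shown, and the field and function names are assumptions for this example rather than the infoLogger schema.

      # Illustrative sketch of a structured log store with an indexed severity
      # field so shifters can filter the error stream.  SQLite stands in for
      # MySQL; the column set is a reduced assumption for this example.
      import sqlite3
      import time

      conn = sqlite3.connect(":memory:")
      conn.execute("""
          CREATE TABLE messages (
              timestamp REAL,
              hostname  TEXT,
              facility  TEXT,
              severity  TEXT,
              message   TEXT
          )""")
      conn.execute("CREATE INDEX idx_severity ON messages(severity)")

      def log(hostname, facility, severity, message):
          conn.execute("INSERT INTO messages VALUES (?, ?, ?, ?, ?)",
                       (time.time(), hostname, facility, severity, message))

      log("daq-node-07", "readout", "ERROR", "link timeout on channel 3")
      log("daq-node-07", "readout", "INFO", "run started")

      # Typical shifter view: only the error stream.
      for row in conn.execute("SELECT * FROM messages WHERE severity='ERROR'"):
          print(row)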

  1. Completion of the ATLAS control system upgrade

    International Nuclear Information System (INIS)

    Munson, F. H.

    1998-01-01

    In the fall of 1992 at the SNEAP (Symposium of North Eastern Accelerator Personnel) a project to upgrade the ATLAS (Argonne Tandem Linear Accelerator System) control system was first reported. Not unlike the accelerator it services, the control system will continue to evolve. However, the first of this year marked the completion of this most recent upgrade project. Since the control system upgrade took place during a period when ATLAS was operating at a record number of hours, special techniques were necessary to enable the development of the new control system ''on line'' while still serving the needs of normal operations. This paper reviews the techniques used for upgrading the ATLAS control system while the system was in use. In addition, a summary of the upgrade project and final configuration, as well as some of the features of the new control system, is provided

  2. Evolution of the Trigger and Data Acquisition System for the ATLAS experiment

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system to gather and select particle collision data at unprecedented energy and rates. The TDAQ is composed of three levels which reduce the event rate from the design bunch-crossing rate of 40 MHz to an average event recording rate of about 200 Hz. The first part of this paper gives an overview of the operational performance of the DAQ system during 2011 and the first months of data taking in 2012. It describes how the flexibility inherent in the design of the system has been exploited to meet the changing needs of ATLAS data taking and in some cases push performance beyond the original design performance specification. The experience accumulated in the TDAQ system operation during these years has also stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), ...

  3. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Carpeño, A., E-mail: antonio.cruiz@upm.es [Universidad Politécnica de Madrid UPM, Madrid (Spain); Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S. [Universidad Politécnica de Madrid UPM, Madrid (Spain); Vega, J.; Castro, R. [Laboratorio Nacional de Fusión CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  4. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    International Nuclear Information System (INIS)

    Carpeño, A.; Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  5. DZERO Level 3 DAQ/Trigger Closeout

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Tevatron Collider, located at the Fermi National Accelerator Laboratory, delivered its last 1.96 TeV proton-antiproton collisions on September 30th, 2011. The DZERO experiment continues to take cosmic data for final alignment for several more months. Since Run 2 started, in March 2001, all DZERO data has been collected by the DZERO Level 3 Trigger/DAQ System. The system is a modern, networked, commodity hardware trigger and data acquisition system based around a large central switch with about 60 front ends and 200 trigger computers. DZERO front end crates are VME based. A Single Board Computer interfaces between the detector data on VME and the network transport for the DAQ system. Event flow is controlled by the Routing Master which can steer events to clusters of farm nodes based on the low level trigger bits that fired. The farm nodes are multi-core commodity computer boxes, without special hardware, that run isolated software to make the final Level 3 trigger decision. Passed events are transferred to th...

  6. First-year experience with the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Corso-Radu, A

    2010-01-01

    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as operational conditions of the hardware and software elements of the detector, trigger and data acquisition systems. At the moment the ATLAS Trigger/DAQ system is distributed over more than 1000 computers, which is about one third of the final ATLAS size. At every minute of an ATLAS data taking session the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles more than 4 million histogram updates coming from more than 4 thousand applications, executes 10 thousand advanced data quality checks for a subset of those histograms, and displays histograms and results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. This note presents an overview of the online monitoring software framework, and describes the experience gained during an extensive commissioning period as well as at the first phase of LHC beam in September 2008. Performance results obtained on the current ATLAS DAQ system will also be presented, showing that the performance of the framework is adequate for the final ATLAS system.
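
    A minimal Python sketch of an automatic data-quality check applied to a published histogram, in the spirit of the checks described above; the histogram representation (a plain list of bin contents) and the threshold are assumptions made for this example, not the ATLAS monitoring framework.

      # Illustrative sketch of one automatic data-quality check run on each
      # histogram update a monitoring application receives.  The histogram
      # representation and threshold are assumptions for this example.
      def empty_bin_fraction(bins):
          return sum(1 for b in bins if b == 0) / len(bins)

      def dq_check(name, bins, max_empty_fraction=0.10):
          """Return 'OK' or 'ALARM' for one histogram update."""
          frac = empty_bin_fraction(bins)
          status = "OK" if frac <= max_empty_fraction else "ALARM"
          print(f"{name}: {frac:.0%} empty bins -> {status}")
          return status

      # Two hand-made examples standing in for published histogram contents.
      dq_check("pixel/occupancy_layer0", [5, 7, 6, 8, 9, 7, 6, 5, 8, 7])
      dq_check("pixel/occupancy_layer1", [5, 0, 0, 0, 9, 0, 6, 0, 8, 0])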

  7. Overview of the ATLAS distributed computing system

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration

    2018-01-01

    The CERN ATLAS experiment successfully uses a worldwide computing infrastructure to support the physics program during LHC Run 2. The grid workflow system PanDA routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total more than 300 PB of data is distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever growing LHC luminosity in future runs new developments are underway to even more efficiently use opportunistic resources such as HPCs and utilize new technologies. This presentation will review and explain the outline and the performance of the ATLAS distributed computing system and give an outlook to new workflow and data management ideas for the beginning of the LHC Run 3.

  8. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Pedraza Lopez, S; The ATLAS collaboration

    2012-01-01

    In order to reconstruct trajectories of charged particles, ATLAS is equipped with a tracking system built using different technologies embedded in a 2T solenoidal magnetic field. ATLAS physics goals require high-resolution, unbiased measurement of all charged-particle kinematic parameters in order to assure accurate invariant mass reconstruction and interaction and decay vertex finding. These critically depend on systematic effects related to the alignment of the tracking system. In order to eliminate harmful systematic deformations, various advanced tools and techniques have been put in place. These include information from known mass resonances, the energy of electrons and positrons measured by the electromagnetic calorimeters, etc. Despite being stable under normal running conditions, the ATLAS tracking system responds to sudden environmental changes (temperature, magnetic field) with small collective deformations. These have to be identified and corrected in order to assure uniform, highest quality tracking...

  9. Flexible custom designs for CMS DAQ

    CERN Document Server

    Arcidiacono, Roberta; Boyer, Vincent; Brett, Angela Mary; Cano, Eric; Carboni, Andrea; Ciganek, Marek; Cittolin, Sergio; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Gulmini, Michele; Gutleber, Johannes; Jacobs, Claude; Maron, Gaetano; Meijers, Frans; Meschi, Emilio; Murray, Steven John; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Piedra Gomez, Jonatan; Pieri, Marco; Pollet, Lucien; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Sumorok, Konstanty; Suzuki, Ichiro; Tsirigkas, Dimitrios; Varela, Joao

    2006-01-01

    The CMS central DAQ system is built using commercial hardware (PCs and networking equipment), except for two components: the Front-end Readout Link (FRL) and the Fast Merger Module (FMM). The FRL interfaces the sub-detector specific front-end electronics to the central DAQ system in a uniform way. The FRL is a compact-PCI module with an additional PCI 64bit connector to host a Network Interface Card (NIC). On the sub-detector side, the data are written to the link using a FIFO-like protocol (SLINK64). The link uses the Low Voltage Differential Signal (LVDS) technology to transfer data with a throughput of up to 400 MBytes/s. The FMM modules collect status signals from the front-end electronics of the sub-detectors, merge and monitor them and provide the resulting signals with low latency to the first level trigger electronics. In particular, the throttling signals allow the trigger to avoid buffer overflows and data corruption in the front-end electronics when the data produced in the front-end exceeds the c...

  10. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  11. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides a fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.
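
    The error classification idea can be sketched as follows; the severity patterns below are invented for illustration and are much simpler than the actual NICOS error-detection rules.

    ```python
    import re

    # Hypothetical severity rules for build/test log lines, ordered from the
    # most to the least severe; the first matching rule wins.
    SEVERITY_RULES = [
        ("FATAL",   re.compile(r"internal compiler error|segmentation fault", re.I)),
        ("ERROR",   re.compile(r"\berror\b", re.I)),
        ("WARNING", re.compile(r"\bwarning\b", re.I)),
    ]

    def classify(line):
        for severity, pattern in SEVERITY_RULES:
            if pattern.search(line):
                return severity
        return "OK"

    for line in ["pkg/Foo.cxx:12: error: 'bar' was not declared",
                 "warning: unused variable 'x'",
                 "Building package Bar"]:
        print(classify(line), "--", line)
    ```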

  12. Multilevel Workflow System in the ATLAS Experiment

    International Nuclear Information System (INIS)

    Borodin, M; De, K; Navarro, J Garcia; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates the actual workflow tasks and their jobs are executed across more than a hundred distributed computing sites by PanDA - the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the next level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation. (paper)
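
    The chaining of Monte Carlo production steps described above can be sketched as follows; the step names follow the text, but the function and dataset naming are invented for illustration and do not reflect the ProdSys2/JEDI interfaces.

    ```python
    # Each step consumes the previous step's output dataset and produces a new one.
    MC_STEPS = ["generate", "simulate", "digitize", "reconstruct", "make_ntuples"]

    def run_workflow(input_dataset, steps=MC_STEPS):
        dataset = input_dataset
        for step in steps:
            dataset = f"{dataset}.{step}"          # stand-in for submitting a task
            print(f"task '{step}' -> output dataset {dataset}")
        return dataset

    run_workflow("mc.123456.ttbar")                # dataset name is invented
    ```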

  13. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  14. Status of the ATLAS control system upgrade

    International Nuclear Information System (INIS)

    Munson, F.H.; Ferraretto, M.; Rutherford, B.

    1992-01-01

    Certain components of the ATLAS control system are two generations behind today's technology. It has been decided to upgrade the control system, in part, by replacing Digital Equipment Corporation (DEC) PDP-11 computers with present-day VAX technology. Two primary goals have been defined for the upgraded control system. The first of these goals is to keep additional "in-house" written software to a minimum, while providing the portability necessary to ensure the continued use of existing software. In an attempt to achieve this goal, commercially-available software has been utilized to provide a foundation for the final control-system configuration. The second goal is to develop the new control system while not interfering with accelerator operations. This paper describes some of the motivation for upgrading the ATLAS control system, the basic features of the new control system, and the present status of the system's development

  15. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    Energy Technology Data Exchange (ETDEWEB)

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D. R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A. F.; Ratti, A.; Sabbi, G. L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-08-17

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy extraction failure during testing of a high field Nb3Sn dipole.

  16. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    International Nuclear Information System (INIS)

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D.R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A.F.; Ratti, A.; Sabbi, G.L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-01-01

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy extraction failure during testing of a high field Nb3Sn dipole.

  17. Design of the ANTARES LCM-DAQ board test bench using a FPGA-based system-on-chip approach

    Energy Technology Data Exchange (ETDEWEB)

    Anvar, S. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France); Kestener, P. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France)]. E-mail: pierre.kestener@cea.fr; Le Provost, H. [CEA Saclay, DAPNIA/SEDI, 91191 Gif-sur-Yvette Cedex (France)

    2006-11-15

    The System-on-Chip (SoC) approach consists in using state-of-the-art FPGA devices with embedded RISC processor cores, high-speed differential LVDS links and ready-to-use multi-gigabit transceivers allowing development of compact systems with substantial number of IO channels. Required performances are obtained through a subtle separation of tasks between closely cooperating programmable hardware logic and user-friendly software environment. We report about our experience in using the SoC approach for designing the production test bench of the off-shore readout system for the ANTARES neutrino experiment.

  18. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of the centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment including new multi-core blade servers, developing GUI tools for user authentication and role management, testing and validating 64-bit OS, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  19. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    Science.gov (United States)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.

  20. DAQ

    CERN Multimedia

    F. Meijers

    2012-01-01

      Preparations for the 2012 physics run. The HLT farm currently comprises 720 PC nodes with dual E5430 4-core CPUs (installed in 2009) and 288 PC nodes with dual X5650 6-core CPUs (installed in early 2011). This gives a total HLT capacity of 9216 cores and 18 TB of memory. It provides a capacity for HLT of about 100 ms/event (on a 2.7 GHz E5430 core) at 100 kHz L1 rate in pp collisions. In order to be able to handle the expected higher instantaneous luminosities in 2012 (up to 7E33 at 50 ns bunch spacing) with a pile-up of ~35 events, a further extension of the HLT is necessary. This extension aims at a capacity of about 150 ms/event. The 2012 extension will consist of 256 nodes with dual 8-core CPUs of the new ‘Sandy-Bridge’ architecture and is foreseen to be ready for deployment after the first LHC MD period (end April). In order to connect the new PC nodes to the existing data network switches, the event builder network has been re-cabled (see Image 3) to reduce the number of dat...
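
    A back-of-the-envelope check of the time budget quoted above: with N cores and a Level-1 accept rate R, the average per-event budget is simply N / R. The sketch below ignores differences in per-core speed, which is why the extended farm appears below the 150 ms/event target; the faster 'Sandy-Bridge' cores raise the effective budget further.

    ```python
    def hlt_budget_ms(cores, l1_rate_hz):
        """Average HLT processing budget per event, in milliseconds."""
        return cores / l1_rate_hz * 1000.0

    print(hlt_budget_ms(9216, 100e3))              # ~92 ms/event with the current farm
    print(hlt_budget_ms(9216 + 256 * 16, 100e3))   # ~133 ms/event counting 256 dual 8-core nodes
    ```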

  1. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, a large-scale high-energy physics detector at the BEPC. The inter-process communication (IPC) of the online software in distributed environments is pivotal for the design and implementation of the DAQ system. This article introduces a distributed inter-process communication framework, which is based on CORBA and used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on the IPC. (authors)

  2. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Cortes-Gonzalez, Arely; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator-based readout system. Combined information from all systems allows monitoring and equalising of the calorimeter r...

  3. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Marjanovic, Marija; The ATLAS collaboration

    2018-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibers to photo-multiplier tubes (PMTs), located in the outer part of the calorimeter. The readout is segmented into about 5000 cells, each one being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of the full readout chain during data taking, a set of calibration sub-systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements, and an integrator-based readout system. Combined information from all systems allows the calorimeter response to be monitored and equalized at each stage of the signal evolution, from scintillation light to digitization. Calibration runs are monitored from a data quality perspective and u...

  4. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  5. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  6. The C-RORC PCIe Card and its Application in the ALICE and ATLAS Experiments

    CERN Document Server

    Engel, H; Costa, F; Crone, G J; Eschweiler, D; Francis, D; Green, B; Joos, M; Kebschull, U; Kiss, T; Kugel, A; Panduro Vasquez, J G; Soos, C; Teixeira-Dias, P; Tremblet, L; Vande Vyvre, P; Vandelli, W; Vermeulen, J C; Werner, P; Wickens, F J

    2015-01-01

    The ALICE and ATLAS DAQ systems read out detector data via point-to-point serial links into custom hardware modules, the ALICE RORC and ATLAS ROBIN. To meet the increase in operational requirements both experiments are replacing their respective modules with a new common module, the C-RORC. This card, developed by ALICE, implements a PCIe Gen 2 x8 interface and interfaces to twelve optical links via three QSFP transceivers. This paper presents the design of the C-RORC, its performance and its application in the ALICE and ATLAS experiments.

  7. Readout and Trigger for the AFP Detector at the ATLAS Experiment

    CERN Document Server

    Kocian, Martin; The ATLAS collaboration

    2018-01-01

    AFP, the ATLAS Forward Proton detector, consists of silicon detectors at 205 m and 217 m on each side of ATLAS. In 2016 two detectors on one side were installed. The FEI4 chips are read out at 160 Mbps over optical fibers. The DAQ system uses an FPGA board with an Artix chip and a mezzanine card with an RCE data processing module based on a Zynq chip with an ARM processor running Linux. In this contribution we give an overview of the AFP detector and the commissioning steps taken to integrate it with the ATLAS TDAQ. Furthermore, first performance results are presented.

  8. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    Boumediene, Djamel Eddine; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes (PMTs). PMT signals are then digitized at 40 MHz and stored on the detector, and are only transferred off the detector once the first-level trigger acceptance has been confirmed. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser, charge injection elements and an integrator b...

  9. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2016-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. The TileCal calibration system comprises Cesium radioactive sources, laser and charge injection elements, and allows the calorimeter response to be monitored and equalized at each stage of the signal production, from scin...

  10. ATLAS Tile calorimeter calibration and monitoring systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00445232; The ATLAS collaboration

    2017-01-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, ...

  11. The ATLAS Data Acquisition and High Level Trigger Systems: Experience and Upgrade Plans

    CERN Document Server

    Hauser, R; The ATLAS collaboration

    2012-01-01

    The ATLAS DAQ/HLT system reduces the Level-1 rate of 75 kHz to a few kHz event-building rate after Level 2 and a few hundred Hz output rate to disk. It has operated with an average data-taking efficiency of about 94% during recent years. The performance has far exceeded the initial requirements, with about 5 kHz event-building rate and 500 Hz output rate in 2012, driven mostly by physics requirements. Several improvements and upgrades are foreseen in the upcoming long shutdowns, both to simplify the existing architecture and to improve the performance. On the network side new core switches will be deployed and possible use of 10 Gbit Ethernet links for critical areas is foreseen. An improved read-out system to replace the existing solution based on PCI is under development. A major evolution of the high level trigger system foresees a merging of the Level 2 and Event Filter functionality on a single node, including the event building. This will represent a big simplification of the existing system, while ...

  12. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts into system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric analysis cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is
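
    The integrator idea can be sketched with a toy example: analyst-specified parameters are passed to a couple of system models and their outputs are accumulated over the campaign. The model names, parameters and cost figures below are invented and carry no relation to the real ATLAS spreadsheet models.

    ```python
    # Toy system models returning cost and launched mass for invented parameters.
    def launcher_model(params):
        return {"cost": params["flights"] * 95.0, "launched_mass": params["flights"] * 20.0}

    def depot_model(params):
        return {"cost": 150.0 + params["years"] * 12.0, "launched_mass": 30.0}

    def run_campaign(models, params):
        totals = {"cost": 0.0, "launched_mass": 0.0}
        for model in models:
            result = model(params)
            for key in totals:
                totals[key] += result[key]
        return totals

    print(run_campaign([launcher_model, depot_model], {"flights": 6, "years": 10}))
    ```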

  13. Operational performance of the ATLAS trigger and data acquisition system and its possible evolution

    CERN Document Server

    Negri, A; The ATLAS collaboration

    2012-01-01

    The experience accumulated in the ATLAS DAQ/HLT system operation during these years has stimulated interest in exploring possible evolutions, despite the success of the current design. One attractive direction is to merge three systems - the second trigger level (L2), the Event Builder (EB), and the Event Filter (EF) - into a single homogeneous one in which each HLT node executes all the steps required by the trigger and data acquisition process. Each L1 event is assigned to an available HLT node, which executes the L2 algorithms using a subset of the event data and, upon positive selection, builds the event, which is further processed by the EF algorithms. Appealing aspects of this design are: a simplification of the software architecture and of its configuration, a better exploitation of the computing resources, the caching of fragments already collected for L2 processing, the automated load balancing between L2 and EF selection steps, and the sharing of code and services on HLT nodes. Furthermore, the full treatmen...

  14. LabVIEW DAQ for NE213 Neutron Detector

    International Nuclear Information System (INIS)

    Al-Adeeb, Mohammed

    2003-01-01

    A neutron spectroscopy system, based on an NE213 liquid scintillation detector, is to be placed at the Stanford Linear Accelerator Center to measure neutron spectra from a few MeV up to 800 MeV beyond shielding. The NE213 scintillator, coupled with a Photomultiplier Tube (PMT), detects radiation and converts it into a current for signal processing. Signals are processed through Nuclear Instrument Modules (NIM) and Computer Automated Measurement and Control (CAMAC) modules. CAMAC is a computer-automated data acquisition and handling system. Pulses are properly prepared and fed into an analog-to-digital converter (ADC), a standard CAMAC module. The ADC classifies the incoming analog pulses into 1 of 2048 digital channels. Data acquisition (DAQ) software based on LabVIEW, version 7.0, acquires and organizes data from the CAMAC ADC. The DAQ system presents a spectrum showing the relationship between pulse events and the respective charge (digital channel number). Various photon sources, such as Co-60, Y-88, and AmBe-241, are used to calibrate the NE213 detector. For each source, a Compton edge and reference energy [units of MeVee] are obtained. A complete calibration curve results (at a given applied voltage to the PMT and pre-amplification gain) when the Compton edge and reference energy for each source are plotted. This project is focused on the development of a DAQ system and control setup to collect and process information from an NE213 liquid scintillation detector. A manual is created to document the process of the development and interpretation of the LabVIEW-based DAQ system. Future high-energy neutron measurements can be referenced and normalized according to this calibration curve
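
    The calibration-curve construction can be sketched numerically: a linear fit of reference Compton-edge energies against their measured ADC channels converts channel number to electron-equivalent energy. The channel values below are invented and the edge energies are only approximate.

    ```python
    import numpy as np

    channels = np.array([310.0, 520.0, 1190.0])   # invented Compton-edge channel positions
    energies = np.array([1.04, 1.61, 4.20])       # approximate edges for Co-60, Y-88, AmBe (MeVee)

    slope, offset = np.polyfit(channels, energies, 1)
    print(f"E [MeVee] = {slope:.4f} * channel + {offset:.3f}")
    print("channel 800 ->", round(slope * 800 + offset, 2), "MeVee")
    ```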

  15. The D0 online monitoring and automatic DAQ recovery

    International Nuclear Information System (INIS)

    Haas, A.

    2004-01-01

    The DZERO experiment, located at the Fermi National Accelerator Laboratory, has recently started the Run 2 physics program. The detector upgrade included a new Data Acquisition/Level 3 Trigger system. Part of the design for the DAQ/Trigger system was a new monitoring infrastructure. The monitoring was designed to satisfy real-time requirements with 1-second resolution as well as non-real-time data. It was also designed to handle a large number of displays without putting undue load on the sources of monitoring information. The resulting protocol is based on XML, is easily extensible, and has spawned a large number of displays, clients, and other applications. It is also one of the few sources of detector performance information available outside the Online System's security wall. A tool based on this system, which provides for auto-recovery of DAQ errors, has been designed. This talk will include a description of the DZERO DAQ/Online monitor server, based on the ACE framework, the protocol, the auto-recovery tool, and several of the unique displays, which include an ORACLE-based archiver and numerous GUIs
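
    The flavour of such an extensible XML monitoring protocol can be conveyed with a small sketch; the message schema below is invented and is not the actual DZERO protocol.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical monitoring message: clients pick out the items they need
    # and simply ignore elements they do not understand.
    message = """
    <monitor source="l3_node_017" time="1049212800">
      <item name="event_rate" value="612.4" unit="Hz"/>
      <item name="buffer_occupancy" value="0.31"/>
    </monitor>
    """

    root = ET.fromstring(message)
    readings = {item.get("name"): float(item.get("value")) for item in root.iter("item")}
    print(root.get("source"), readings)
    ```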

  16. ATLAS silicon microstrip detector system (SCT)

    International Nuclear Information System (INIS)

    Unno, Y.

    2003-01-01

    The SCT, together with the pixel and the transition radiation tracker systems and with a central solenoid, forms the central tracking system of the ATLAS detector at the LHC. Series production of SCT silicon microstrip sensors is near completion. The sensors have been shown to be robust against high-voltage operation up to the 500 V required after fluences of 3x10^14 protons/cm^2. SCT barrel modules are in series production. A low-noise CCD camera has been used to debug the onset of leakage currents

  17. Advanced Alignment of the ATLAS Tracking System

    CERN Document Server

    Butti, P; The ATLAS collaboration

    2014-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, planar silicon modules or microstrips (PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2T solenoidal magnetic field. Misalignments and deformations of the active detector elements deteriorate the track reconstruction resolution and lead to systematic biases on the measured track parameters. The alignment procedure exploits various advanced tools and techniques in order to determine the module positions and correct for deformations. For LHC Run II, the system is being upgraded with the installation of a new pixel layer, the Insertable B-Layer (IBL).

  18. FELIX: The New Approach for Interfacing to Front-end Electronics for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(SzGeCERN)754725; The ATLAS collaboration; Anderson, John Thomas; Borga, Andrea; Boterenbrood, Hendrik; Chen, Hucheng; Chen, Kai; Drake, Gary; Donszelmann, Mark; Francis, David; Gorini, Benedetto; Guest, Daniel; Lanni, Francesco; Lehmann Miotto, Giovanna; Levinson, Lorne; Roich, Alexander; Schreuder, Frans Philip; Schumacher, J\\"orn; Vandelli, Wainer; Zhang, Jinlong

    2016-01-01

    From the ATLAS Phase-I upgrade onward, new or upgraded detectors and trigger systems will be interfaced to the data acquisition, detector control and timing (TTC) systems by the Front-End Link eXchange (FELIX). FELIX is the core of the new ATLAS Trigger/DAQ architecture. Functioning as a router between custom serial links and a commodity network, FELIX is implemented by server PCs with commodity network interfaces and PCIe cards with large FPGAs and many high-speed serial fibre transceivers. By separating data transport from data manipulation, the latter can be done by software in commodity servers attached to the network. Replacing traditional point-to-point links between front-end components and the DAQ system by a switched network, FELIX provides scaling, flexibility, uniformity and upgradability, and reduces the diversity of custom hardware solutions in favour of software.
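
    The router idea behind FELIX can be sketched as a subscription table: data arriving on a given front-end link is forwarded to every network peer that subscribed to that link. Link identifiers, peer addresses and payloads below are invented.

    ```python
    from collections import defaultdict

    subscriptions = defaultdict(list)              # link id -> list of network peers

    def subscribe(link_id, peer):
        subscriptions[link_id].append(peer)

    def on_link_data(link_id, payload):
        for peer in subscriptions[link_id]:
            print(f"forward {len(payload)} bytes from link 0x{link_id:x} to {peer}")

    subscribe(0x1A, "readout-host-01:12345")
    subscribe(0x1A, "dcs-gateway:12346")
    on_link_data(0x1A, b"\x00" * 128)
    ```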

  19. Control in the ATLAS TDAQ System

    CERN Document Server

    Liko, D; Flammer, J; Dobson, M; Jones, R; Mapelli, L; Alexandrov, I; Korobov, S; Kotov, V; Mineev, M; Amorim, A; Fiuza de Barros, N; Klose, D; Pedro, L; Badescu, E; Caprini, M; Kolos, S; Kazarov, A; Ryabov, Yu; Soloviev, I; Computing In High Energy Physics

    2005-01-01

    The TDAQ system requires a comprehensive and flexible control system. Its role ranges from the so-called run control, e.g. starting and stopping the data taking, to error handling and fault tolerance. It also includes initialization and verification of the overall system. Following the traditional approach, a hierarchical system of customizable controllers has been proposed. For the final system, all functionality will therefore be available in a distributed manner, with the possibility of local customization. After a technology survey, the open source expert system CLIPS has been chosen as a basis for the implementation of the supervision and the verification system. The CLIPS interpreter has been extended to provide a general control framework. Other ATLAS Online software components have been integrated as plug-ins and provide the mechanism for configuration and communication. Several components have been implemented sharing this technology. The dynamic behavior of the individual component is fully described by th...

  20. ATLAS TDAQ system administration: Master of Puppets

    CERN Document Server

    AUTHOR|(SzGeCERN)727357; The ATLAS collaboration; Ballestrero, Sergio; Brasolin, Franco; Fazio, Daniel; Gament, Costin-Eugen; Scannicchio, Diana; Twomey, Matthew Shaun

    2017-01-01

    Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm is comprised of ∼4000 servers processing the data read out from ∼100 million detector channels through multiple trigger levels. The configuration of these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with their own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical script system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ Systems Administrators, for both the local and net-booted systems, and to be able to fulfil the requirements of TDAQ, Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated, though in the end, Puppet was chosen as the applic...

  1. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)
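
    As a quick illustration of the rate reduction quoted above, the overall rejection can be factorised between the two trigger levels. The intermediate Level-1 output rate and the recording rate used below are representative figures chosen for the example, not values taken from this abstract.

    ```python
    input_rate_hz = 40e6     # design bunch-crossing rate
    l1_output_hz = 100e3     # representative Level-1 accept rate
    record_rate_hz = 400.0   # representative recording rate ("a few hundred Hz")

    print("Level-1 rejection factor:", input_rate_hz / l1_output_hz)    # 400
    print("HLT rejection factor:    ", l1_output_hz / record_rate_hz)   # 250
    print("overall rejection factor:", input_rate_hz / record_rate_hz)  # 100000
    ```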

  2. Argonne Tandem Linac Accelerator System (ATLAS)

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a national user facility at Argonne National Laboratory in Argonne, Illinois. The ATLAS facility is a leading facility for nuclear structure research in the...

  3. FPGA-based 10-Gbit Ethernet Data Acquisition Interface for the Upgraded Electronics of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Grohs, J P; The ATLAS collaboration

    2013-01-01

    The readout of the trigger signals of the ATLAS Liquid Argon (LAr) calorimeters is foreseen to be upgraded in order to prepare for operation during the first high-luminosity phase of the Large Hadron Collider (LHC). Signals with improved spatial granularity are planned to be received from the detector by a Digital Processing System (DPS) in ATCA technology and will be sent in real time to the ATLAS trigger system using custom optical links. These data are also sampled by the DPS for monitoring and will be read out by the regular Data Acquisition (DAQ) system of ATLAS, which is a network-based PC farm. The bandwidth between the DPS module and the DAQ system is expected to be of the order of 10 Gbit/s per module and a standard Ethernet protocol is foreseen to be used. DPS data will be prepared and sent by a modern FPGA either through a switch or directly to a Read-Out System (ROS) PC serving as buffer interface of the ATLAS DAQ. In a prototype setup, an ATCA blade equipped with a Xilinx Virtex-5 FPGA is used to send da...

  4. Design and Implementation of the ATLAS Detector Control System

    CERN Document Server

    Boterenbrood, H; Cook, J; Filimonov, V; Hallgren, B I; Heubers, W P J; Khomoutnikov, V; Ryabov, Yu; Varela, F

    2004-01-01

    The overall dimensions of the ATLAS experiment and its harsh environment, due to radiation and magnetic field, represent new challenges for the implementation of the Detector Control System. It supervises all hardware of the ATLAS detector, monitors the infrastructure of the experiment, and provides information exchange with the LHC accelerator. The system must allow for the operation of the different ATLAS sub-detectors in stand-alone mode, as required for calibration and debugging, as well as the coherent and integrated operation of all sub-detectors for physics data taking. For this reason, the Detector Control System is logically arranged to map the hierarchical organization of the ATLAS detector. Special requirements are placed onto the ATLAS Detector Control System because of the large number of distributed I/O channels and of the inaccessibility of the equipment during operation. Standardization is a crucial issue for the design and implementation of the control system because of the large variety of e...

  5. LASER monitoring system for the ATLAS Tile Calorimeter

    International Nuclear Information System (INIS)

    Viret, S.

    2010-01-01

    The ATLAS detector at the Large Hadron Collider (LHC) at CERN uses a scintillator-iron technique for its hadronic Tile Calorimeter (TileCal). Scintillating light is read out via 9852 photomultiplier tubes (PMTs). Calibration and monitoring of these PMTs are done using a LASER-based system. Short light pulses are sent simultaneously into all the TileCal PMTs during ATLAS physics runs, thus providing essential information for ATLAS data quality and monitoring analyses. The experimental setup developed for this purpose is described, as well as preliminary results obtained during the ATLAS commissioning phase in 2008.

  6. The Run-2 ATLAS Trigger System

    CERN Document Server

    Ruiz-Martinez, Aranzazu; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV, resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger systems, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. With a few examples, we will show the ...

  7. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  8. Final Test at the Surface of the ATLAS Endcap Muon Trigger Chamber Electronics

    CERN Document Server

    Kubota, T; Kanaya, N; Kawamoto, T; Kobayashi, T; Kuwabara, T; Nomoto, H; Sakamoto, H; Yamaguchi, T; Fukunaga, C; Ikeno, M; Iwasaki, H; Nagano, K; Nozaki, M; Sasaki, O; Tanaka, S; Yasu, Y; Hasegawa, Y; Oshita, H; Takeshita, T; Nomachi, M; Sugaya, Y; Sugimoto, T; Okumura, Y; Takahashi, Y; Tomoto, M; Kadosaka, T; Kawagoe, K; Kiyamura, H; Kurashige, H; Niwa, T; Ochi, A; Omachi, C; Takeda, H; Lifshitz, R; Lupu, N; Bressler, S; Tarem, S; Kajomovitz, E; Ben Ami, S; Bahat Treidel, O; Benhammou, Ya; Etzion, E; Lellouch, D; Levinson, L; Mikenberg, G; Roich, A

    2007-01-01

    For the detector commissioning planned in 2007, the sector assembly of the ATLAS muon endcap trigger chambers and the final test at the surface of the assembled electronics are being done at CERN and are almost complete. For the test, we built up a Data Acquisition (DAQ) system using test pulses of two types and cosmic rays in order to check the functionality of the various aspects of the electronics mounted on a sector. So far, 99% of all 320,000 channels have been tested and most of them have been installed in the ATLAS cavern. In this presentation, we describe the DAQ systems and the mass-test procedure in detail, and report the results of the electronics tests together with some practical experience

  9. The Database Driven ATLAS Trigger Configuration System

    CERN Document Server

    Martyniuk, Alex; The ATLAS collaboration

    2015-01-01

    This contribution describes the trigger selection configuration system of the ATLAS low- and high-level trigger (HLT) and the upgrades it received in preparation for LHC Run 2. The ATLAS trigger configuration system is responsible for applying the physics selection parameters for the online data taking at both trigger levels and the proper connection of the trigger lines across those levels. Here the low-level trigger consists of the already existing central trigger (CT) and the new Level-1 Topological trigger (L1Topo), which has been added for Run 2. In detail, the tasks of the configuration system during the online data taking are: application of the selection criteria (e.g. energy cuts, minimum multiplicities, trigger object correlation) at the three trigger components L1Topo, CT, and HLT; on-the-fly (e.g. rate-dependent) generation and application of prescale factors to the CT and HLT to adjust the trigger rates to the data taking conditions, such as falling luminosity or rate spikes in the detector readout ...
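
    The effect of a prescale factor can be sketched very simply: a factor N keeps, on average, one out of every N events that fired a given chain, which is how output rates are adjusted to the data-taking conditions. The chain names and factors below are invented.

    ```python
    import random

    def prescaled_accept(prescale):
        """Accept an event with probability 1/prescale (prescale >= 1)."""
        return random.random() < 1.0 / prescale

    prescales = {"CHAIN_A": 1, "CHAIN_B": 50}      # 1 means keep every event
    kept = {name: 0 for name in prescales}
    for _ in range(100_000):
        for name, factor in prescales.items():
            if prescaled_accept(factor):
                kept[name] += 1
    print(kept)   # roughly {'CHAIN_A': 100000, 'CHAIN_B': 2000}
    ```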

  10. ATLAS Tile Calorimeter calibration and monitoring systems

    Science.gov (United States)

    Cortés-González, Arely

    2018-01-01

    The ATLAS Tile Calorimeter is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for the reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength-shifting fibres to photomultiplier tubes, located in the outer part of the calorimeter. Neutral particles may also produce a signal after interacting with the material and producing charged particles. The readout is segmented into about 5000 cells, each of them being read out by two photomultipliers in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during data taking, a set of calibration systems is used. This comprises Cesium radioactive sources, laser, charge injection elements and an integrator-based readout system. Information from all systems allows the calorimeter response to be monitored and equalised at each stage of the signal production, from scintillation light to digitisation. Calibration runs are monitored from a data quality perspective and used as a cross-check for physics runs. The data quality efficiency achieved during 2016 was 98.9%. The calibration and stability results reported here show that the TileCal performance is within the design requirements and that the calorimeter has made an essential contribution to reconstructed objects and physics results.

  11. ATLAS Tile calorimeter calibration and monitoring systems

    Science.gov (United States)

    Chomont, Arthur; ATLAS Collaboration

    2017-11-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.

  12. ATLAS Maintenance and Operation management system

    CERN Document Server

    Copy, B

    2007-01-01

    The maintenance and operation of the ATLAS detector will involve thousands of contributors from 170 physics institutes. Planning and coordinating the action of ATLAS members, ensuring their expertise is properly leveraged and that no parts of the detector are understaffed or overstaffed will be a challenging task. The ATLAS Maintenance and Operation application (referred to as Operation Task Planner inside the ATLAS experiment) offers a fluent web based interface that combines the flexibility and comfort of a desktop application, intuitive data visualization and navigation techniques, with a lightweight service oriented architecture. We will review the application, its usage within the ATLAS experiment, its underlying design and implementation.

  13. The Tilecal/ATLAS detector control system

    CERN Document Server

    Tomasio Pina, João Antonio

    2004-01-01

    Tilecal is the barrel hadronic calorimeter of the ATLAS detector that is presently being built at CERN to operate at the LHC accelerator. The main task of the Tilecal detector control system (DCS) is to enable the coherent and safe operation of the detector. All actions initiated by the operator and all errors, warnings, and alarms concerning the hardware of the detector are handled by the DCS. The DCS has to continuously monitor all operational parameters and give warnings and alarms concerning the hardware of the detector. The DCS architecture consists of a distributed back-end (BE) system running on PCs and different front-end (FE) systems. The implementation of the BE will be achieved with a commercial supervisory control and data acquisition system (SCADA) and the FE instrumentation will consist of a wide variety of equipment. The connection between the FE and BE is provided by fieldbus or L

  14. LAND/R3B DAQ developments

    Energy Technology Data Exchange (ETDEWEB)

    Toernqvist, Hans; Aumann, Thomas; Loeher, Bastian [Technische Universitaet Darmstadt, Darmstadt (Germany); Simon, Haik [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Johansson, Haakan [Chalmers Institute of Technology, Goeteborg (Sweden); Collaboration: R3B-Collaboration

    2015-07-01

    Existing experimental setups aim to exploit most of the improved capabilities and specifications of the upcoming FAIR facility at GSI. Their DAQ designs will require some re-evaluation and upgrades. This presentation summarizes the R3B experimental campaigns in 2014, during which the R3B DAQ was subject to tests of several new features that will aid researchers in using larger and more complicated experimental setups in the future. It also acted as part of a small testing ground for the NUSTAR DAQ infrastructure. In order to allow correlations between several experimental sites to be extracted, newly proposed triggering and timestamping implementations were tested over significant distances. Also, with growing experimental complexity comes a greater risk of problems that may be difficult to characterize and solve. To this end, essential remote monitoring and debugging tools have been used successfully.

  15. The liquid helium system of ATLAS

    International Nuclear Information System (INIS)

    Nixon, J.M.; Bollinger, L.M.

    1989-01-01

    Starting in 1978 with one small refrigerator and distribution line, the LHe system of ATLAS has gradually grown into a complex network, as required by several enlargements of the superconducting linac. The cryogenic system now comprises 3 refrigerators, 11 helium compressors, approximately 340 ft. of coaxial LHe transfer line, 3 1000-l dewars, and approximately 76 LHe valves that deliver steady-state flowing LHe to 16 beam-line cryostats. In normal operation, the 3 refrigerators are linked so as to provide cooling where needed. LHe heat exchangers in distribution lines play an important role. This paper discusses design features of the system, including the logic of the controls that permit the coupled refrigerators to operate stably in the presence of large and sudden changes in heat load. 8 refs., 3 figs

  16. ATLAS Detector Control System Data Viewer

    CERN Document Server

    Tsarouchas, Charilaos; Roe, S; Bitenc, U; Fehling-Kaschek, ML; Winkelmann, S; D’Auria, S; Hoffmann, D; Pisano, O

    2011-01-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. DCS Data Viewer (DDV) is a web interface application that provides access to historical data of ATLAS Detector Control System [1] (DCS) parameters written to the database (DB). It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as “value over time” charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by XML con...
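
    The client-server split described above can be pictured with a minimal sketch like the one below; the URL, query parameters and JSON layout are hypothetical and only stand in for the real DDV interface:

      # Hedged sketch of a DDV-style request: the client asks the server for the
      # archived values of one DCS parameter and prints a "value over time" listing.
      import json
      from urllib.parse import urlencode
      from urllib.request import urlopen

      def fetch_history(server, element, start, end):
          query = urlencode({"element": element, "from": start, "to": end})
          with urlopen(f"{server}/history?{query}") as reply:
              return json.load(reply)  # assumed layout: [{"ts": ..., "value": ...}, ...]

      for point in fetch_history("http://ddv.example.org", "ATLTIL/HV/LBA01", "2011-01-01", "2011-01-02"):
          print(point["ts"], point["value"])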

  17. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the LHC collisions. To reconstruct the trajectories of charged particles produced in these collisions, the ATLAS tracking system is equipped with silicon planar sensors and drift-tube based detectors, which together constitute the ATLAS Inner Detector. In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires its almost 36000 degrees of freedom to be determined accurately. The demanded precision for the alignment of the silicon sensors is below 10 micrometers, which implies the use of a large sample of high-momentum, isolated charged-particle tracks. The high level trigger selects those tracks online, and the raw data with the hit information of the triggered tracks are stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of the track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of ...
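
    The core of the procedure, minimising track-hit residuals over the alignment parameters, can be illustrated with a one-parameter toy (a single module translation estimated by least squares); the real problem couples almost 36000 degrees of freedom and is not reproduced here:

      # Toy least-squares alignment: the best single translation of a module is the
      # mean of the track-hit residuals measured on it. Numbers are hypothetical.
      def fit_offset(residuals_mm):
          return sum(residuals_mm) / len(residuals_mm)

      residuals = [0.013, 0.009, 0.011, 0.012, 0.010]   # hit - track extrapolation, in mm
      delta = fit_offset(residuals)
      print(f"estimated misalignment = {delta:.4f} mm")
      print("post-correction residuals:", [round(r - delta, 4) for r in residuals])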

  18. The ATLAS event filter

    CERN Document Server

    Beck, H P; Boissat, C; Davis, R; Duval, P Y; Etienne, F; Fede, E; Francis, D; Green, P; Hemmer, F; Jones, R; MacKinnon, J; Mapelli, Livio P; Meessen, C; Mommsen, R K; Mornacchi, Giuseppe; Nacasch, R; Negri, A; Pinfold, James L; Polesello, G; Qian, Z; Rafflin, C; Scannicchio, D A; Stanescu, C; Touchard, F; Vercesi, V

    1999-01-01

    An overview of the studies for the ATLAS Event Filter is given. The architecture and the high-level design of the DAQ-1 prototype are presented. The current status of the prototypes is briefly given. Finally, future plans and milestones are given. (11 refs).

  19. Glance Information System for ATLAS Management

    CERN Document Server

    De Oliveira Fernandes Moraes, L; The ATLAS collaboration; Ramos De Azevedo Evora, LH; Karam, K; Fink Grael, F; Pommes, K; Nessi, M; Cirilli, M

    2011-01-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, members' appointments, the authors' list, the preparation and publication of papers and the nomination of speakers. Previously, most of the information was accessible by a limited group of people and the system used was not designed to handle new requirements easily. Moreover, developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Besides that, the maintenance has to be an easy task considering the long lifetime of the experiment and the turnover of professionals. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the dat...
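
    The intermediate-layer idea described above can be sketched as one generic interface with a pluggable adapter per database backend; all class and method names below are hypothetical, not the actual Glance code:

      # Hedged sketch of a Glance-like layer: users talk to a generic interface,
      # backend particularities live in adapters. Names are illustrative only.
      from abc import ABC, abstractmethod

      class DatabaseAdapter(ABC):
          @abstractmethod
          def search(self, table, criteria):
              """Return matching rows as a list of dictionaries."""

      class OracleAdapter(DatabaseAdapter):
          def search(self, table, criteria):
              # a real adapter would translate criteria into backend-specific SQL
              return [{"backend": "oracle", "table": table, **criteria}]

      class GenericInterface:
          def __init__(self, adapter):
              self._adapter = adapter
          def members(self, institute):
              return self._adapter.search("members", {"institute": institute})

      print(GenericInterface(OracleAdapter()).members("CERN"))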

  20. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  1. The ATLAS beam pick-up based timing system

    International Nuclear Information System (INIS)

    Ohm, C.; Pauly, T.

    2010-01-01

    The ATLAS BPTX stations are composed of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The BPTX signals are discriminated with a constant-fraction discriminator to provide a Level-1 trigger when a bunch passes through ATLAS. Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX monitoring system measures the phase between collisions and clock with a precision better than 100 ps in order to guarantee a stable phase relationship for optimal signal sampling in the sub-detector front-end electronics. In addition to monitoring this phase, the properties of the individual bunches are measured and the structure of the beams is determined. On September 10, 2008, the first LHC beams reached the ATLAS experiment. During this period with beam, the ATLAS BPTX system was used extensively to time in the read-out of the sub-detectors. In this paper, we present the performance of the BPTX system and its measurements of the first LHC beams.
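
    The phase monitoring described above can be pictured with a toy calculation: fold each bunch arrival time onto the clock period and average the offsets. The 24.95 ns period corresponds to the roughly 40.08 MHz LHC bunch clock; the arrival times below are invented for illustration:

      # Toy BPTX-style phase monitor: offset of bunch arrival times w.r.t. the clock.
      CLOCK_PERIOD_NS = 24.95          # ~40.08 MHz LHC bunch clock

      def mean_phase(bunch_times_ns):
          offsets = [t % CLOCK_PERIOD_NS for t in bunch_times_ns]
          return sum(offsets) / len(offsets)

      print(f"mean phase = {mean_phase([10.12, 35.09, 60.10, 85.11]):.3f} ns")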

  2. AGIS: Evolution of Distributed Computing information system for ATLAS

    Science.gov (United States)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  3. Advanced alignment of the ATLAS tracking system

    CERN Document Server

    AUTHOR|(CDS)2085334; The ATLAS collaboration

    2016-01-01

    In order to reconstruct the trajectories of charged particles, the ATLAS experiment exploits a tracking system built using different technologies, silicon planar modules or microstrips (PIX and SCT detectors) and gaseous drift tubes (TRT), all embedded in a 2 T solenoidal magnetic field. Misalignments of the active detector elements and deformations of the structures (which can lead to so-called weak modes) deteriorate the resolution of the track reconstruction and lead to systematic biases in the measured track parameters. The applied alignment procedures exploit various advanced techniques in order to minimise track-hit residuals and remove detector deformations. For the LHC Run II, the Pixel Detector has been refurbished and upgraded with the installation of a new pixel layer, the Insertable B-layer (IBL).

  4. The ATLAS/TILECAL Detector Control System

    CERN Document Server

    Santos, H; The ATLAS collaboration

    2010-01-01

    Tilecal, the barrel hadronic calorimeter of ATLAS, is a sampling calorimeter where scintillating tiles are embedded in an iron matrix. The tiles are optically coupled to wavelength-shifting fibers that carry the optical signal to photo-multipliers. It has a cylindrical shape and is made out of 3 cylinders, the Long Barrel with the LBA and LBC partitions, and the two Extended Barrels with the EBA and EBC partitions. The main task of the Tile calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector, are handled by DCS. The Tile calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the laser and cesium calibration systems, the data acquisition system, configuration and conditions databases and the detector safety system. In...

  5. The Detector Safety System of the ATLAS experiment

    International Nuclear Information System (INIS)

    Beltramello, O; Burckhart, H J; Franz, S; Jaekel, M; Jeckel, M; Lueders, S; Morpurgo, G; Santos Pedrosa, F dos; Pommes, K; Sandaker, H

    2009-01-01

    The ATLAS detector at the Large Hadron Collider at CERN is one of the most advanced detectors for High Energy Physics experiments ever built. It consists of on the order of ten functionally independent sub-detectors, which all have dedicated services such as power, cooling and gas supply. A Detector Safety System has been built to detect possible operational problems and abnormal and potentially dangerous situations at an early stage and, if needed, to bring the relevant part of ATLAS automatically into a safe state. The procedures and the configuration specific to ATLAS are described in detail and first operational experience is given.
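
    The behaviour described above, detecting an abnormal condition and driving the affected part into a predefined safe state, can be sketched as a simple rule table; the sensor name, threshold and actions below are hypothetical illustrations, not the real DSS configuration:

      # Hedged sketch of a DSS-style rule: abnormal reading -> safe-state actions.
      SAFE_ACTIONS = {"cooling_fault": ["switch off HV", "close cooling valves"]}

      def evaluate(sensors):
          actions = []
          if sensors["cooling_flow"] < 0.2:        # hypothetical threshold (a.u.)
              actions.extend(SAFE_ACTIONS["cooling_fault"])
          return actions

      print(evaluate({"cooling_flow": 0.05}))      # -> actions bringing the part to a safe state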

  6. The consistency service of the ATLAS Distributed Data Management system

    CERN Document Server

    Serfon, C; The ATLAS collaboration

    2011-01-01

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools and DQ2 site services, or by site administrators, which report corrupted or lost files. It automatically corrects the reported errors and informs the users in case of irrecoverable file loss.
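
    The workflow sketched in the abstract, with files reported as suspect, recoverable corruption repaired and irrecoverable losses flagged to the user, can be illustrated as follows; the checksums, site names and helper functions are hypothetical:

      # Hedged sketch of a consistency check: compare replica checksums with the
      # catalogue value, re-transfer from a good copy if possible, otherwise
      # declare the file lost and notify the owner. Names are illustrative only.
      def check_file(name, expected_md5, replicas, notify):
          good = [site for site, md5 in replicas.items() if md5 == expected_md5]
          bad = [site for site, md5 in replicas.items() if md5 != expected_md5]
          for site in bad:
              if good:
                  print(f"re-transfer {name} from {good[0]} to {site}")   # recoverable
              else:
                  notify(f"{name} is irrecoverably lost")
          return bool(good)

      check_file("data.pool.root", "1a2b3c", {"SITE_A": "1a2b3c", "SITE_B": "ffffff"}, print)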

  7. The Consistency Service of the ATLAS Distributed Data Management system

    CERN Document Server

    Serfon, C; The ATLAS collaboration

    2010-01-01

    With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data loss due to software and hardware failure is increasing. In order to ensure the consistency of all data produced by ATLAS, a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools and DQ2 site services, or by site administrators, which report corrupted or lost files. It automatically corrects the reported errors and informs the users in case of irrecoverable file loss.

  8. Upgrades of the ATLAS trigger system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221618; The ATLAS collaboration

    2018-01-01

    In coming years the LHC is expected to undergo upgrades to increase both the energy of proton-proton collisions and the instantaneous luminosity. In order to cope with these more challenging LHC conditions, upgrades of the ATLAS trigger system will be required. This talk will focus on some of the key aspects of these upgrades. Firstly, the upgrade period between 2019 and 2021 will see an increase in instantaneous luminosity to $3\times10^{34} \rm{cm^{-2}s^{-1}}$. Upgrades to the Level 1 trigger system during this time will include improvements for both the muon and calorimeter triggers. These include the upgrade of the first-level Endcap Muon trigger, the calorimeter trigger electronics and the addition of new calorimeter feature extractor hardware, such as the Global Feature Extractor (gFEX). An overview will be given on the design and development status of the aforementioned systems, along with the latest testing and validation results. By 2026, the High Luminosity LHC will be able to deliver 14 TeV collisions ...

  9. ATLAS: A High-cadence All-sky Survey System

    Science.gov (United States)

    Tonry, J. L.; Denneau, L.; Heinze, A. N.; Stalder, B.; Smith, K. W.; Smartt, S. J.; Stubbs, C. W.; Weiland, H. J.; Rest, A.

    2018-06-01

    Technology has advanced to the point that it is possible to image the entire sky every night and process the data in real time. The sky is hardly static: many interesting phenomena occur, including variable stationary objects such as stars or QSOs, transient stationary objects such as supernovae or M dwarf flares, and moving objects such as asteroids and the stars themselves. Funded by NASA, we have designed and built a sky survey system for the purpose of finding dangerous near-Earth asteroids (NEAs). This system, the “Asteroid Terrestrial-impact Last Alert System” (ATLAS), has been optimized to produce the best survey capability per unit cost, and therefore is an efficient and competitive system for finding potentially hazardous asteroids (PHAs) but also for tracking variables and finding transients. While carrying out its NASA mission, ATLAS now discovers more bright (m day cadence. ATLAS discovered the afterglow of a gamma-ray burst independent of the high energy trigger and has released a variable star catalog of 5 × 10^6 sources. This is the first of a series of articles describing ATLAS, devoted to the design and performance of the ATLAS system. Subsequent articles will describe in more detail the software, the survey strategy, ATLAS-derived NEA population statistics, transient detections, and the first data release of variable stars and transient light curves.

  10. ATLAS TDAQ System Integration and Commissioning

    CERN Document Server

    Negri, A

    2010-01-01

    The ATLAS detector will be exposed to proton-proton collisions at a center-of-mass energy of 14 TeV with a bunch crossing rate of 40 MHz. A three-level trigger system has been designed to reduce this rate down to the level at which only interesting events are fully reconstructed. The level 1 trigger reduces the rate to 75 kHz using custom-built electronics. The Region of Interest Builder delivers the Region of Interest records to the second level trigger, which runs the selection algorithms on commodity processors and brings the rate further down to ~3.5 kHz. Finally, the Event Filter reduces the rate to ~200 Hz for permanent storage. We review the trigger and data acquisition architecture and its in situ commissioning using the almost complete detector. Results on system functionality and performance based on cosmic data, early experience with LHC beam in 2008 and preselected simulated events are presented.
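
    The rates quoted above imply the following rejection factors per trigger level (a simple arithmetic check using only the numbers in the abstract):

      # Rejection factors implied by the quoted rates: 40 MHz collisions, 75 kHz
      # after Level 1, ~3.5 kHz after Level 2, ~200 Hz after the Event Filter.
      rates_hz = {"collisions": 40e6, "level 1": 75e3, "level 2": 3.5e3, "event filter": 200.0}

      stages = list(rates_hz.items())
      for (prev, r_prev), (cur, r_cur) in zip(stages, stages[1:]):
          print(f"{prev} -> {cur}: rejection ~ {r_prev / r_cur:.0f}")
      print(f"overall: ~ {rates_hz['collisions'] / rates_hz['event filter']:.0f}")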

  11. ATLAS LTCS Vertically Challenged System Lessons Learned

    Science.gov (United States)

    Patel, Deepak; Garrison, Matt; Ku, Jentung

    2014-01-01

    Re-planning of LTCS TVAC testing and supporting RTA (Receiver Telescope Assembly) Test Plan and Procedure document preparation. The Laser Thermal Control System (LTCS) is designed to maintain the lasers onboard the Advanced Topographic Laser Altimeter System (ATLAS) at their operational temperatures. In order to verify the functionality of the LTCS, a thermal balance test of the thermal hardware was performed. During the first cold start of the LTCS, the Loop Heat Pipe (LHP) was unable to control the temperature of the laser mass simulators. The control heaters were fully on and the loop temperature remained well below the desired setpoint. Thermal analysis of the loop did not show these results. This unpredicted behavior of the LTCS was brought to a panel of LHP experts. Based on the testing and a review of all the data, multiple diagnostics were performed in order to narrow down the cause. The prevailing theory is that gravity caused oscillating flow within the loop, which artificially increased the control power needs. This resulted in a replan of the LTCS test flow and the addition of a GSE heater to allow vertical operation.

  12. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Heller, C; The ATLAS collaboration

    2011-01-01

    ATLAS is one of the multipurpose experiments that record the products of the LHC proton-proton and heavy-ion collisions. In order to reconstruct trajectories of charged particles produced in these collisions, ATLAS is equipped with a tracking system built using two different technologies, silicon planar sensors (pixel and microstrips) and drift-tube based detectors. Together they constitute the ATLAS Inner Detector, which is embedded in a 2 T axial field. Efficiently reconstructing tracks from charged particles traversing the detector and precisely measuring their momenta are of crucial importance for physics analyses. In order to achieve its scientific goals, an alignment of the ATLAS Inner Detector is required to accurately determine its more than 700,000 degrees of freedom. The goal of the alignment is set such that the limited knowledge of the sensor locations should not deteriorate the resolution of track parameters by more than 20% with respect to the intrinsic tracker resolution. The implementation of t...
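
    A short worked example of the 20% requirement quoted above, under our own assumption (not stated in the record) that alignment errors add in quadrature to the intrinsic resolution:

      # If sigma_total = sqrt(sigma_intrinsic**2 + sigma_align**2), then a 20%
      # degradation budget limits the alignment term to sqrt(1.2**2 - 1) of the
      # intrinsic resolution. This quadrature model is our assumption.
      import math

      allowed_degradation = 1.20
      max_align_fraction = math.sqrt(allowed_degradation**2 - 1.0)
      print(f"alignment contribution must stay below {max_align_fraction:.2f} x intrinsic resolution")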

  13. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 200 petabytes spread over 130 storage sites and can handle file transfer rates of up to 30 Hz. In this talk, we discuss our experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service and the evolution of the system over its first year of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline future developments which mainly focus on performance and automation. Finally, we discuss the long-term evolution of ATLAS data management.

  14. Performance of the ATLAS Trigger System in 2010

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; et al. (The ATLAS Collaboration)
Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Succurro, Antonella; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tic, Tomáš; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Tobias, Jürgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Treis, Johannes; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; van der Graaf, Harry; van der Kraaij, Erik; Van Der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, 
Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Viret, Sébastien; Virzi, Joseph; Vitale, Antonio; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Joshua C; Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Weydert, Carole; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, 
Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Anton; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Živković, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2012-01-03

    Proton-proton collisions at $\sqrt{s}=7$ TeV and heavy-ion collisions at $\sqrt{s_{NN}}=2.76$ TeV were produced by the LHC and recorded using the ATLAS experiment's trigger system in 2010. The LHC is designed with a maximum bunch-crossing rate of 40 MHz, and the ATLAS trigger system is designed to record approximately 200 of these events per second. The trigger system selects events by rapidly identifying signatures of muon, electron, photon, tau lepton, jet, and B meson candidates, as well as using global event signatures such as missing transverse energy. An overview of the ATLAS trigger system, the evolution of the system during 2010, and the performance of the trigger system components and selections based on the 2010 collision data are presented. A brief outline of plans for the trigger system in 2011 is also given.
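
    As a rough numerical illustration of the rate reduction quoted above: going from a 40 MHz bunch-crossing rate to roughly 200 recorded events per second corresponds to an overall rejection factor of about 2x10^5, achieved in stages by the multi-level trigger. The sketch below works through that arithmetic; the per-level split is a hypothetical example, not the recorded 2010 trigger menu.

        # Illustrative arithmetic only: overall rejection needed by a multi-level trigger.
        # The 40 MHz input and ~200 Hz output are quoted in the abstract above;
        # the split across Level-1 / Level-2 / Event Filter is a hypothetical example.
        input_rate_hz = 40e6        # LHC design bunch-crossing rate
        output_rate_hz = 200.0      # approximate ATLAS recording rate in 2010

        overall_rejection = input_rate_hz / output_rate_hz
        print(f"overall rejection factor ~ {overall_rejection:.0f}")   # ~200000

        # Hypothetical per-level output rates, multiplying to the same overall factor.
        level_outputs_hz = {"L1": 75e3, "L2": 3e3, "EF": 200.0}
        prev = input_rate_hz
        for level, rate in level_outputs_hz.items():
            print(f"{level}: {prev:.0f} Hz -> {rate:.0f} Hz (rejection {prev / rate:.1f}x)")
            prev = rate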

  15. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift-tube based detectors. These detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a precision of a few microns. This permits optimal measurements of the parameters of the charged-particle trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large-scale computing simulation of the ATLAS detector has been performed, mimicking the ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment result for several challenges (real cosmic ray data taking and computing system commissioning) will be...
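
    A minimal sketch of the track-based alignment idea behind this record: alignment constants are chosen so as to minimize track-hit residuals. The toy below fits a single translation offset per module from simulated residuals; the module names, offsets and hit resolution are invented and stand in for the roughly 36000 correlated degrees of freedom solved for in the real ID alignment.

        # Toy illustration of track-based alignment: estimate a per-module offset
        # by minimizing hit residuals.  Entirely hypothetical numbers; the real ID
        # alignment solves for ~36000 correlated degrees of freedom.
        import numpy as np

        rng = np.random.default_rng(42)
        true_offsets_um = {"module_A": 12.0, "module_B": -7.5, "module_C": 3.2}

        fitted = {}
        for name, true_shift in true_offsets_um.items():
            # Simulated residuals: true misalignment plus 30 um hit resolution.
            residuals = true_shift + rng.normal(0.0, 30.0, size=5000)
            # For a pure translation, the chi^2-minimizing correction is the mean residual.
            fitted[name] = residuals.mean()

        for name in true_offsets_um:
            print(f"{name}: true {true_offsets_um[name]:+.1f} um, fitted {fitted[name]:+.2f} um")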

  16. Results from the commissioning of the ATLAS Pixel Detector

    CERN Document Server

    Masetti, L

    2008-01-01

    The Pixel Detector is the innermost detector of the ATLAS experiment at the Large Hadron Collider at CERN. It is an 80 million channel silicon tracking system designed to detect charged tracks and secondary vertices with very high precision. After connection of cooling and services and verification of their operation, the ATLAS Pixel Detector is now in the final stage of its commissioning phase. Calibration of optical connections, verification of the analog performance and special DAQ runs for noise studies have been performed and the first tracks in combined operation with the other subdetectors of the ATLAS Inner Detector were observed. The results from calibration tests on the whole detector and from cosmic muon data are presented.
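
    The noise studies mentioned above amount, in essence, to measuring per-channel occupancy in triggers without beam and masking outliers. The sketch below shows that logic on simulated counts; the occupancy threshold and channel counts are assumptions for illustration, not the actual Pixel Detector calibration values.

        # Toy noise-masking pass: count hits per pixel in an empty-trigger run and
        # mask pixels whose occupancy exceeds a threshold.  Numbers are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_triggers = 10_000, 100_000

        # Most pixels fire with tiny probability; a handful are genuinely noisy.
        hit_prob = np.full(n_pixels, 1e-6)
        hit_prob[rng.choice(n_pixels, size=20, replace=False)] = 1e-3

        hit_counts = rng.binomial(n_triggers, hit_prob)
        occupancy = hit_counts / n_triggers

        occupancy_cut = 1e-4          # assumed threshold, hits per pixel per trigger
        noisy_mask = occupancy > occupancy_cut
        print(f"masked {noisy_mask.sum()} of {n_pixels} pixels "
              f"(max occupancy {occupancy.max():.2e})")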

  17. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system also enables further data-processing steps on the Grid, performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data-processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from the main ATLAS areas: Trigger, Physics, Data Preparation and Software & Computing. To ensure scalability, the next-generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...
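
    A hedged sketch of the kind of configuration management and fault tolerance described above: a processing task held in a small configuration record and retried automatically on failure. The field names, the transformation tag and the retry policy are illustrative assumptions, not the actual production system schema.

        # Minimal sketch of a production-task record with automated retry,
        # standing in for the fault tolerance described above.  All names invented.
        from dataclasses import dataclass, field
        import random

        @dataclass
        class Task:
            task_id: int
            transformation: str              # e.g. a reconstruction release/step
            input_dataset: str
            parameters: dict = field(default_factory=dict)
            max_attempts: int = 3

        def run_job(task: Task) -> bool:
            # Placeholder for submitting a job to the Grid; fails 30% of the time here.
            return random.random() > 0.3

        def process(task: Task) -> bool:
            for attempt in range(1, task.max_attempts + 1):
                if run_job(task):
                    print(f"task {task.task_id}: succeeded on attempt {attempt}")
                    return True
                print(f"task {task.task_id}: attempt {attempt} failed, retrying")
            return False

        process(Task(1234, "reco_r17", "data10_7TeV.periodB.RAW",
                     parameters={"geometry": "ATLAS-GEO-16"}))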

  18. Measurement of Z boson production in association with jets at the LHC and study of a DAQ system for the Triple-GEM detector in view of the CMS upgrade

    CERN Document Server

    Léonard, Alexandre

    This PhD thesis presents the measurement of the differential cross section for the production of a Z boson in association with jets in proton-proton collisions at the Large Hadron Collider (LHC) at CERN, at a centre-of-mass energy of 8 TeV. The development of a data acquisition (DAQ) system for the Triple-Gas Electron Multiplier (GEM) detector in view of the Compact Muon Solenoid (CMS) detector upgrade is also presented. The events used for the data analysis were collected by the CMS detector during the year 2012 and constitute a sample of 19.6/fb of integrated luminosity. The cross section measurements are performed as a function of the jet multiplicity, the jet transverse momentum and pseudorapidity, and the scalar sum of the jet transverse momenta. The results were obtained by correcting the observed distributions for detector effects. The measured differential cross sections are compared to state-of-the-art Monte Carlo predictions from MadGraph 5, Sherpa 2 and MadGraph5_aMC@NLO. These measureme...
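
    "Correcting the observed distributions for detector effects" is an unfolding step; a deliberately simple bin-by-bin version, assuming correction factors taken from simulation, is sketched below. All yields and correction factors are invented, and the real analysis uses a more sophisticated procedure.

        # Bin-by-bin unfolding sketch: correct an observed jet-multiplicity spectrum
        # with truth/reco ratios from simulation.  All numbers are invented.
        import numpy as np

        observed   = np.array([120000., 30000., 6000., 1000.])   # reco-level yields per jet-multiplicity bin
        mc_reco    = np.array([100000., 25000., 5200.,  900.])
        mc_truth   = np.array([110000., 27000., 5600., 1050.])

        correction = mc_truth / mc_reco            # per-bin efficiency x acceptance correction
        unfolded   = observed * correction

        luminosity_fb = 19.6                       # integrated luminosity quoted in the record
        cross_section_fb = unfolded / luminosity_fb
        for njets, xs in enumerate(cross_section_fb, start=1):
            print(f">= {njets} jets: {xs:.1f} fb")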

  19. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2011-01-01

    ATLAS is one of the four LHC experiments and started operating in collision mode in 2010. The ATLAS apparatus itself, as well as the Trigger and DAQ systems, are extremely complex facilities built by a collaboration of 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS Trigger and DAQ (TDAQ) community in order to support efficient participation of experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from web-based access to the general status and data quality of the ongoing data-taking session, to a scalable service providing real-time mirroring of detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where this data is made available ...
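
    A minimal sketch of the mirroring idea described above, assuming the public-network side only needs to expose the latest value of each named monitoring quantity; the class and quantity names are invented and do not reflect the actual TDAQ remote-monitoring protocol.

        # Toy "mirror" of monitoring data: the private-network side publishes named
        # values, the public-network side keeps only the latest copy of each.
        # Purely illustrative; not the actual ATLAS remote-monitoring service.
        import time

        class MonitoringMirror:
            def __init__(self):
                self._latest = {}            # name -> (timestamp, value)

            def publish(self, name, value):
                self._latest[name] = (time.time(), value)

            def read(self, name):
                return self._latest.get(name)

        mirror = MonitoringMirror()
        mirror.publish("L1_rate_hz", 74_500)
        mirror.publish("lumi_block", 213)
        print(mirror.read("L1_rate_hz"))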

  20. Test Management Framework for the Data Acquisition of the ATLAS Experiment

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2017-01-01

    Data Acquisition (DAQ) of the ATLAS experiment is a large, distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. The advanced testing and diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation, to fast recovery in case of problems and, ultimately, to the high efficiency of the whole experiment. The base layer of the verification and diagnostic functionality is a test management framework. We have developed a flexible test management system that allows experts to define and configure tests for different components, indicate follow-up actions to test failures and describe inter-dependencies between DAQ or detector elements. This development is based on the experience gained with the previous test system, which was used during the first three years of data taking. We discovered that more emphasis needed to be pu...
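
    A hedged sketch of what defining tests, follow-up actions and inter-dependencies could look like as a data structure, with tests executed in dependency order; the class, test names and follow-up strings are invented and do not mirror the actual TDAQ test framework.

        # Toy test-management structure: tests with dependencies and follow-up actions,
        # executed in dependency order.  All names are invented for illustration.
        from dataclasses import dataclass, field
        from typing import Callable

        @dataclass
        class Test:
            name: str
            check: Callable[[], bool]
            depends_on: list = field(default_factory=list)
            on_failure: str = "report to expert"

        def run_all(tests: dict) -> dict:
            results = {}

            def run(name: str) -> bool:
                if name in results:
                    return results[name]
                test = tests[name]
                # A test only runs if everything it depends on has already passed.
                if all(run(dep) for dep in test.depends_on):
                    results[name] = test.check()
                else:
                    results[name] = False
                if not results[name]:
                    print(f"{name} FAILED -> follow-up: {test.on_failure}")
                return results[name]

            for name in tests:
                run(name)
            return results

        tests = {
            "network_reachable": Test("network_reachable", lambda: True),
            "rod_configured":    Test("rod_configured", lambda: True, ["network_reachable"]),
            "readout_ok":        Test("readout_ok", lambda: False, ["rod_configured"],
                                      on_failure="power-cycle the readout module"),
        }
        run_all(tests)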

  1. The ATLAS Monte Carlo tuning system

    CERN Document Server

    Wahrmund, S; The ATLAS collaboration

    2011-01-01

    When the first corresponding experimental analyses from the LHC became available, the ATLAS experiment moved the tuning of the underlying-event and minimum-bias event-shape modelling, previously done by hand, to the automated Professor tuning tool used together with the Rivet analysis framework. The tuning effort for the Pythia 8 generator, which includes improved models for diffraction, has been started in this automated way in ATLAS, with the aim of obtaining a good description of the pile-up generated by multiple minimum-bias interactions. The first results for these Pythia 8 tunes are presented, including a study of tunes for various PDFs.
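
    In broad strokes, such automated tuning parameterizes the generator response to its tune parameters and then minimizes a chi-square against the measured observables. A one-parameter caricature is sketched below, assuming a quadratic per-bin parameterization; all numbers are invented and this is not the Professor/Rivet machinery itself.

        # One-parameter caricature of automated generator tuning: parameterize the
        # MC prediction for each observable bin as a polynomial in the tune parameter,
        # then pick the parameter value minimizing chi^2 against data.  Invented numbers.
        import numpy as np

        data      = np.array([1.00, 0.80, 0.55])     # measured bin values
        data_err  = np.array([0.05, 0.04, 0.04])

        # Quadratic response of each bin to the tune parameter p (fit elsewhere from MC runs).
        coeffs = np.array([[0.90, 0.20, -0.05],
                           [0.70, 0.15, -0.02],
                           [0.40, 0.25, -0.04]])      # rows: bins; columns: 1, p, p^2

        def prediction(p):
            return coeffs @ np.array([1.0, p, p * p])

        p_scan = np.linspace(0.0, 2.0, 401)
        chi2 = [np.sum(((data - prediction(p)) / data_err) ** 2) for p in p_scan]
        best = p_scan[int(np.argmin(chi2))]
        print(f"best-fit tune parameter ~ {best:.2f}, chi2 = {min(chi2):.2f}")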

  2. The ATLAS Monte Carlo tuning system

    CERN Document Server

    Wahrmund, S

    2012-01-01

    When the first corresponding experimental analyses from the LHC became available, the ATLAS experiment moved the tuning of the underlying-event and minimum-bias event-shape modelling, previously done by hand, to the automated Professor tuning tool used together with the Rivet analysis framework. The tuning effort for the Pythia 8 generator, which includes improved models for diffraction, has been started in this automated way in ATLAS, with the aim of obtaining a good description of the pile-up generated by multiple minimum-bias interactions. The first results for these Pythia 8 tunes, as well as Pythia 6 shower tunes, are presented, including a study of tunes for various PDFs.

  3. The ATLAS Trigger System Commissioning and Performance

    CERN Document Server

    Hamilton, A

    2010-01-01

    The ATLAS trigger has been used very successfully to collect collision data during the 2009 and 2010 LHC running at centre-of-mass energies of 900 GeV, 2.36 TeV, and 7 TeV. This paper presents the ongoing work to commission the ATLAS trigger with proton collisions, including an overview of the performance of the trigger based on extensive online running. We describe how the trigger has evolved with increasing LHC luminosity and give a brief overview of plans for forthcoming LHC running.

  4. Performance of the ATLAS trigger system in 2015

    Energy Technology Data Exchange (ETDEWEB)

    Aaboud, M. [Univ. Mohamed Premier et LPTPM, Oujda (Morocco). Faculte des Sciences; Aad, G. [CPPM, Aix-Marseille Univ. et CNRS/IN2P3, Marseille (France); Abbott, B. [Oklahoma Univ., Norman, OK (United States). Homer L. Dodge Dept. of Physics and Astronomy; Collaboration: Atlas Collaboration; and others

    2017-05-15

    During 2015 the ATLAS experiment recorded 3.8 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 13 TeV. The ATLAS trigger system is a crucial component of the experiment, responsible for selecting events of interest at a recording rate of approximately 1 kHz from up to 40 MHz of collisions. This paper presents a short overview of the changes to the trigger and data acquisition systems during the first long shutdown of the LHC and shows the performance of the trigger system and its components based on the 2015 proton-proton collision data. (orig.)

  5. Performance of the ATLAS Trigger System in 2015

    CERN Document Server

    Aaboud, Morad; Abbott, Brad; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Aben, Rosemarie; AbouZeid, Ossama; Abraham, Nicola; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adachi, Shunsuke; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Ali, Babar; Aliev, Malik; Alimonti, Gianluca; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alshehri, Azzah Aziz; Alstaty, Mahmoud; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antel, Claire; Antonelli, Mario; Antonov, Alexey; Antrim, Daniel Joseph; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Armitage, Lewis James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Baines, John; Bajic, Milena; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisits, Martin-Stefan; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska-Blenessy, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Janna Katharina; Bell, Andrew Stuart; Bella, Gideon; Bellagamba, 
Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Belyaev, Nikita; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez, Jose; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Beringer, Jürg; Berlendis, Simon; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertram, Iain Alexander; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethani, Agni; Bethke, Siegfried; Bevan, Adrian John; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Biedermann, Dustin; Bielski, Rafal; Biesuz, Nicolo Vladi; Biglietti, Michela; Bilbao De Mendizabal, Javier; Billoud, Thomas Remy Victor; Bilokon, Halina; Bindi, Marcello; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bisanz, Tobias; Bjergaard, David Martin; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blue, Andrew; Blum, Walter; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Boerner, Daniela; Bogaerts, Joannes Andreas; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bokan, Petar; Bold, Tomasz; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortoletto, Daniela; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Bossio Sola, Jonathan David; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Broughton, James; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, Alessia; Bruni, Graziano; Bruni, Lucrezia Stella; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burger, Angela Maria; Burghgrave, Blake; Burka, Klaudia; Burke, Stephen; Burmeister, Ingo; Burr, Jonathan Thomas Peter; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Butler, John; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabrera Urbán, Susana; Caforio, Davide; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Callea, Giuseppe; Caloba, Luiz; Calvente Lopez, Sergio; Calvet, David; Calvet, Samuel; Calvet, Thomas Philippe; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; 
Cameron, David; Caminal Armadans, Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Camplani, Alessandra; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carlson, Benjamin Taylor; Carminati, Leonardo; Carney, Rebecca; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Casper, David William; Castaneda-Miranda, Elizabeth; Castelijn, Remco; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavallaro, Emanuele; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerda Alberich, Leonor; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Stephen Kam-wah; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, Dave; Chatterjee, Avishek; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Huajie; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chomont, Arthur Rene; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocca, Claudia; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Michael; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Colasurdo, Luca; Cole, Brian; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consorti, Valerio; Constantinescu, Serban; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cormier, Felix; Cormier, Kyle James Read; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Crawley, Samuel Joseph; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cueto, Ana; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cúth, Jakub; Czirr, Hendrik; Czodrowski, Patrick; D'amen, Gabriele; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dado, Tomas; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey; Dang, Nguyen 
Phuong; Daniells, Andrew Christopher; Dann, Nicholas Stuart; Danninger, Matthias; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Merlin; Davison, Peter; Dawe, Edmund; Dawson, Ian; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Maria, Antonio; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Dehghanian, Nooshin; Deigaard, Ingrid; Del Gaudio, Michela; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Clemente, William Kennedy; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Díez Cornell, Sergio; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dolejsi, Jiri; Dolezal, Zdenek; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudder, Andreas Christian; Duffield, Emily Marie; Duflot, Laurent; Dührssen, Michael; Dumancic, Mirta; Duncan, Anna Kathryn; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Edwards, Nicholas Charles; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellajosyula, Venugopal; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Ennis, Joseph Stanford; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Ezzi, Mohammed; Fabbri, Federica; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Christian; Farina, Edoardo Maria; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fawcett, William James; 
Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Feremenga, Last; Fernandez Martinez, Patricia; Fernandez Perez, Sonia; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Gareth Thomas; Fletcher, Rob Roy MacGregor; Flick, Tobias; Flierl, Bernhard Matthias; Flores Castillo, Luis; Flowerdew, Michael; Forcolin, Giulio Tiziano; Formica, Andrea; Forti, Alessandra; Foster, Andrew Geoffrey; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gagliardi, Guido; Gagnon, Louis Guillaume; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Ganguly, Sanmay; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gascon Bravo, Alberto; Gasnikova, Ksenia; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gecse, Zoltan; Gee, Norman; Geich-Gimbel, Christoph; Geisen, Marc; Geisler, Manuel Patrice; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; Gentsos, Christos; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghasemi, Sara; Ghneimat, Mazuza; Giacobbe, Benedetto; Giagu, Stefano; Giannetti, Paola; Gibson, Stephen; Gignac, Matthew; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuli, Francesco; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Giulia; Gonella, Laura; Gongadze, Alexi; González de la Hoz, Santiago; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Gozani, Eitan; Graber, Lars; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Grafström, Per; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gravila, Paul Mircea; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, 
Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Grevtsov, Kirill; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guan, Wen; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Gui, Bin; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Guo, Yicheng; Gupta, Ruchi; Gupta, Shaun; Gustavino, Giuliano; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Hadef, Asma; Hageböck, Stephan; Hagihara, Mutsuto; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Haney, Bijan; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hartmann, Nikolai Marcel; Hasegawa, Makoto; Hasegawa, Yoji; Hasib, Ahmed; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Daiki; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Jochen Jens; Heinrich, Lukas; Heinz, Christian; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Herde, Hannah; Herget, Verena; Hernández Jiménez, Yesenia; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirschbuehl, Dominic; Hoad, Xanthe; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohn, David; Holmes, Tova Ray; Homann, Michael; Honda, Takuya; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howarth, James; Hoya, Joaquin; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Qipeng; Hu, Shuyang; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Huo, Peng; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikai, Takashi; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuriy; Iliadis, Dimitrios; Ilic, Nikolina; Introzzi, Gianluca; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Ishijima, Naoki; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; 
Ito, Fumiaki; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Paul; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansky, Roland; Janssen, Jens; Janus, Michel; Janus, Piotr Andrzej; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanneau, Fabien; Jeanty, Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiang, Zihao; Jiggins, Stephen; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Jivan, Harshna; Johansson, Per; Johns, Kenneth; Johnson, William Joseph; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Juste Rozas, Aurelio; Köhler, Markus Konrad; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kaji, Toshiaki; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneti, Steven; Kanjir, Luka; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawade, Kentaro; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kempster, Jacob Julian; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khader, Mazin; Khalil-zada, Farkhad; Khanov, Alexander; Kharlamov, Alexey; Kharlamova, Tatyana; Khoo, Teng Jian; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kilby, Callum; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kivernyk, Oleh; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klioutchnikova, Tatiana; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Knapik, Joanna; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Köhler, Nicolas Maximilian; Koi, Tatsumi; Kolanoski, Hermann; Kolb, Mathis; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotwal, Ashutosh; Koulouris, Aimilianos; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Kowalewska, Anna Bozena; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozakai, Chihiro; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; 
Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kravchenko, Anton; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurchaninov, Leonid; Kurochkin, Yurii; Kurth, Matthew Glenn; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lammers, Sabine; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lanfermann, Marie Christine; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Lazovich, Tomo; Lazzaroni, Massimo; Le, Brian; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Quilleuc, Eloi; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Benoit; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Lerner, Giuseppe; Leroy, Claude; Lesage, Arthur; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Dave; Leyton, Michael; Li, Bing; Li, Changqiao; Li, Haifeng; Li, Lei; Li, Liang; Li, Qi; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Lindquist, Brian Edward; Lionti, Anthony Eric; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Hao; Liu, Hongbin; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Minghui; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina Maria; Loch, Peter; Loebinger, Fred; Loew, Kevin Michael; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Longo, Luigi; Looper, Kristina Anne; Lopez Lopez, Jorge Andres; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Lösel, Philipp Jonathan; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Haonan; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Luzi, 
Pierre Marc; Lynn, David; Lysak, Roman; Lytken, Else; Lyubushkin, Vladimir; Ma, Hong; Ma, Lian Liang; Ma, Yanhui; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Malone, Claire; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Luciano; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Manousos, Athanasios; Mansoulie, Bruno; Mansour, Jason Dhia; Mantifel, Rodger; Mantoani, Matteo; Manzoni, Stefano; Mapelli, Livio; Marceca, Gino; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Maznas, Ioannis; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McClymont, Laurie; McDonald, Emily; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McNamara, Peter Charles; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melini, Davide; Mellado Garcia, Bruce Rafael; Melo, Matej; Meloni, Federico; Menary, Stephen Burns; Meng, Lingxin; Meng, Xiangting; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Miano, Fabrizio; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Minegishi, Yuji; Ming, Yao; Mir, Lluisa-Maria; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mizukami, Atsushi; Mjörnmark, Jan-Ulf; Mlynarikova, Michaela; Moa, Torbjoern; Mochizuki, Kazuya; Mogg, Philipp; Mohapatra, Soumya; Molander, Simon; Moles-Valls, Regina; Monden, Ryutaro; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Stefanie; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, 
Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morvaj, Ljiljana; Moschovakos, Paris; Mosidze, Maia; Moss, Harry James; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Muškinja, Miha; Myagkov, Alexey; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nagai, Koichi; Nagai, Ryo; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nguyen Manh, Tuan; Nickerson, Richard; Nicolaidou, Rosy; Nielsen, Jason; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Norjoharuddeen, Nurfikri; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'grady, Fionnbarr; O'Neil, Dugan; O'Rourke, Abigail Alexandra; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Pacheco Rodriguez, Laura; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paganini, Michela; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palazzo, Serena; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Panagiotopoulou, Evgenia; Panagoulias, Ilias; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Adam Jackson; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Pater, Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, 
Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penwell, John; Peralva, Bernardo; Perego, Marta Maria; Perepelitsa, Dennis; Perez Codina, Estel; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrov, Mariyan; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pin, Arnaud Willy J; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pires, Sylvestre; Pirumov, Hayk; Pitt, Michael; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Pluth, Daniel; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Pranko, Aliaksandr; Prell, Soeren; Price, Darren; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puddu, Daniele; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Raine, John Andrew; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Ratti, Maria Giulia; Rauch, Daniel; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Ravinovich, Ilia; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Reale, Marilea; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reed, Robert; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reiss, Andreas; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rimoldi, Marco; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Rizzi, Chiara; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodina, Yulia; Rodriguez Perez, Andrea; Rodriguez Rodriguez, Daniel; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Roloff, Jennifer; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosien, Nils-Arne; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rudolph, Matthew 
Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Rzehorz, Gerhard Ferdinand; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sánchez, Javier; Sanchez Martinez, Victoria; Sanchez Pineda, Arturo; Sandaker, Heidi; Sandbach, Ruth Laura; Sandhoff, Marisa; Sandoval, Carlos; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sato, Koji; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Savic, Natascha; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schachtner, Balthasar Maria; Schaefer, Douglas; Schaefer, Leigh; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schier, Sheena; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt-Sommerfeld, Korbinian Ralf; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schott, Matthias; Schouwenberg, Jeroen; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schulte, Alexandra; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shirabe, Shohei; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyed Ruhollah; Shope, David Richard; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sickles, Anne Marie; Sidebo, Per Edvin; Sideras Haddad, Elias; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; 
Simon, Dorian; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smiesko, Juraj; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Joshua Wyatt; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snyder, Ian Michael; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Hong Ye; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spieker, Thomas Malte; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Suster, Carl; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Swift, Stewart Patrick; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Masahiro; Tanaka, Reisaburo; Tanaka, Shuji; Tanioka, Ryo; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Paul; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorov, 
Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Tornambe, Peter; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tu, Yanjun; Tudorache, Alexandra; Tudorache, Valentina; Tulbure, Traian Tiberiu; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turgeman, Daniel; Turk Cakir, Ilkay; Turra, Ruggero; Tuts, Michael; Ucchielli, Giulia; Ueda, Ikuo; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usui, Junya; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vasquez, Gerardo; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veeraraghavan, Venkatesh; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tingting; Wang, Wenxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Weber, Stephen; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, 
Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Michael David; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wolf, Tim Michael Heinz; Wolff, Robert; Wolter, Marcin Wladyslaw; Wolters, Helmut; Worm, Steven D; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xi, Zhaoxu; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zacharis, George; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Liqing; Zhang, Matt; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zwalinski, Lukasz

    2017-05-18

    During 2015 the ATLAS experiment recorded $3.8 \mathrm{fb}^{-1}$ of proton-proton collision data at a centre-of-mass energy of $13 \mathrm{TeV}$. The ATLAS trigger system is a crucial component of the experiment, responsible for selecting events of interest at a recording rate of approximately 1 kHz from up to 40 MHz of collisions. This paper presents a short overview of the changes to the trigger and data acquisition systems during the first long shutdown of the LHC and shows the performance of the trigger system and its components based on the 2015 proton-proton collision data.
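
    As a rough worked illustration of the selection power implied by these figures (the factor itself is not quoted in the record), the overall online rejection is simply the ratio of the input and recording rates: $R = f_{\mathrm{collision}} / f_{\mathrm{record}} \approx 40\ \mathrm{MHz} / 1\ \mathrm{kHz} = 4 \times 10^{4}$, i.e. only about one bunch crossing in forty thousand is kept for offline storage.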

  6. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; and high-precision measurements of the third quark family, such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  7. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2013-01-01

    The ATLAS Production System is the top-level workflow manager which translates physicists' needs for production-level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...
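
    The record gives no implementation detail, but the central idea of DEFT (inter-dependent groups of tasks expanded into an executable workflow) can be sketched as a small dependency graph plus a topological ordering. The class and method names below are illustrative assumptions only, not the actual DEFT/JEDI interface:

        # Illustrative sketch only: a Meta-Task as a group of inter-dependent tasks
        # expanded into an execution order. Names are hypothetical, not the DEFT/JEDI API.
        from collections import deque

        class MetaTask:
            def __init__(self, name):
                self.name = name
                self.deps = {}  # task -> set of prerequisite tasks

            def add_task(self, task, requires=()):
                self.deps.setdefault(task, set()).update(requires)
                for r in requires:
                    self.deps.setdefault(r, set())

            def execution_order(self):
                """Topologically sort tasks so each one runs after its prerequisites."""
                pending = {t: set(d) for t, d in self.deps.items()}
                ready = deque(t for t, d in pending.items() if not d)
                order = []
                while ready:
                    t = ready.popleft()
                    order.append(t)
                    for other, d in pending.items():
                        if t in d:
                            d.discard(t)
                            if not d and other not in order and other not in ready:
                                ready.append(other)
                if len(order) != len(pending):
                    raise ValueError("cyclic dependency in Meta-Task")
                return order

        mc = MetaTask("example_sample")
        mc.add_task("evgen")
        mc.add_task("simul", requires=["evgen"])
        mc.add_task("recon", requires=["simul"])
        mc.add_task("derive", requires=["recon"])
        print(mc.execution_order())  # ['evgen', 'simul', 'recon', 'derive']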

  8. Task Management in the New ATLAS Production System

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    The ATLAS Production System is the top-level workflow manager which translates physicists' needs for production-level processing into actual workflows executed across about a hundred processing sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing operating costs. Providing a front-end and a management layer for petascale data processing and analysis, the new Production System contains generic subsystems that can be used in a wider range of applications. The main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, the DEFT subsystem manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. Th...

  9. The ATLAS Data Acquisition and High Level Trigger system

    International Nuclear Information System (INIS)

    2016-01-01

    This paper describes the data acquisition and high level trigger system of the ATLAS experiment at the Large Hadron Collider at CERN, as deployed during Run 1. Data flow as well as control, configuration and monitoring aspects are addressed. An overview of the functionality of the system and of its performance is presented and design choices are discussed.

  10. Report on container technology for the ATLAS TDAQ system

    CERN Document Server

    Gadirov, Hamid

    2016-01-01

    My summer student project "Container technology for the Upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system" focused on research into container-based (operating-system-level) virtualization for TDAQ software. Several tests were performed on the Docker platform, all of which demonstrated compatibility with the TDAQ software.
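
    The report itself is only summarized here, but the kind of check it describes (running software inside a throwaway Docker container and inspecting the result) can be sketched with a few lines of Python driving the standard `docker run --rm` invocation. The image and the command run inside it are assumptions chosen for illustration, not the images actually used for the TDAQ tests:

        # Illustrative sketch only: run a command in a disposable container and
        # report whether it succeeded. Only the plain `docker run --rm` CLI is assumed.
        import subprocess

        def run_in_container(image, command):
            """Run `command` inside a throwaway container; return its exit status."""
            result = subprocess.run(
                ["docker", "run", "--rm", image, *command],
                capture_output=True, text=True,
            )
            print(result.stdout.strip())
            return result.returncode

        # e.g. verify that a generic Python image starts and reports its version
        status = run_in_container("python:3", ["python", "--version"])
        print("container check", "passed" if status == 0 else "failed")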

  11. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  12. Module and electronics developments for the ATLAS ITK pixel system

    CERN Document Server

    Munoz Sanchez, Francisca Javiela; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment is preparing for an extensive modification of its detectors in the course of the planned HL-LHC accelerator upgrade around 2025. The ATLAS upgrade includes the replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). The five innermost layers of ITk will be a pixel detector built of new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m², depending on the final layout choice, which is expected to take place in 2017. In this paper an overview of the ongoing R&D activities on modules and electronics for the ATLAS ITk is given, including the main developments and achievements in silicon planar and 3D sensor technologies, readout and power challenges.

  13. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  14. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    Science.gov (United States)

    Branchini, P.; Budano, A.; Balla, A.; Beretta, M.; Ciambrone, P.; De Lucia, E.; D'Uffizi, A.; Marciniewski, P.

    2013-04-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors which will be installed by the KLOE-2 collaboration. This system consists of a general purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module has been built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB) which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232, a USB and a Gigabit Ethernet interface. The optical interface will be used for DAQ purposes, while the Gigabit Ethernet interface will be used for monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links are used to deliver data to the VME board, which performs data concentration tasks. The return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  15. Front-end DAQ strategy and implementation for the KLOE-2 experiment

    International Nuclear Information System (INIS)

    Branchini, P; Budano, A; Balla, A; Beretta, M; Ciambrone, P; Lucia, E De; D'Uffizi, A; Marciniewski, P

    2013-01-01

    A new front-end data acquisition (DAQ) system has been conceived for the data collection of the new detectors which will be installed by the KLOE-2 collaboration. This system consists of a general purpose FPGA-based DAQ module and a VME board hosting up to 16 optical links. The DAQ module has been built around a Virtex-4 FPGA and is able to acquire up to 1024 different channels distributed over 16 front-end slave cards. Each module is a general interface board (GIB) which also performs first-level data concentration tasks. The GIB has an optical interface, an RS-232, a USB and a Gigabit Ethernet interface. The optical interface will be used for DAQ purposes, while the Gigabit Ethernet interface will be used for monitoring and debugging. Two new detectors exploit this strategy to collect data. Optical links are used to deliver data to the VME board, which performs data concentration tasks. The return optical link from the board to the GIB is used to initialize the front-end cards. The VME interface of the module implements the VME 2eSST protocol in order to sustain a peak data rate of up to 320 MB/s. At the moment the system is working at the Frascati National Laboratory (LNF).

  16. A Web 2.0 approach to DAQ monitoring and controlling

    Energy Technology Data Exchange (ETDEWEB)

    Penschuck, Manuel [Goethe-Universitaet, Frankfurt (Germany); Collaboration: TRB3-Collaboration

    2014-07-01

    In the scope of experimental set-ups for the upcoming FAIR experiments, an FPGA-based general-purpose trigger and read-out board (TRB3) has been developed, which is already in use in several detector set-ups (e.g. HADES, CBM-MVD, PANDA). For on- and off-board communication between the DAQ's subsystems, TrbNet, a specialised high-speed, low-latency network protocol developed for the DAQ system of the HADES detector, is used. Communication with any computer infrastructure is provided by Gigabit Ethernet. Monitoring and configuration of all DAQ systems and front-end electronics are consistently managed by the powerful slow-control features of TrbNet and supported by a flexible and mature software tool-chain, designed to meet the diverse requirements during development, setup phase and experiment. Most building blocks offer a graphical user interface (GUI) implemented using omnipresent web 2.0 technologies, which enable rapid prototyping, network-transparent access and impose minimal software dependencies on the client's machine. This contribution will present the GUI-related features and infrastructure, highlighting the multiple interfaces from the DAQ's slow-control to the client's web-browser.
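
    The record above describes slow-control values being exposed to a browser through a thin web layer. As a purely illustrative sketch (the endpoint path, register map and read_register() function below are hypothetical placeholders, not part of TrbNet), a minimal HTTP status service of that kind could look as follows in Python:

        # Minimal sketch of a web endpoint serving slow-control registers as JSON.
        # The register map and read_register() are hypothetical placeholders for a
        # real slow-control backend such as the one TrbNet provides.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        REGISTERS = {"temperature": 0x42, "busy_fraction": 0x43}  # hypothetical addresses

        def read_register(address):
            """Placeholder for a real slow-control read; returns a dummy value."""
            return 0.0

        class MonitorHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path != "/status":
                    self.send_error(404)
                    return
                payload = {name: read_register(addr) for name, addr in REGISTERS.items()}
                body = json.dumps(payload).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("", 8080), MonitorHandler).serve_forever()

    A browser-side GUI can then simply poll such an endpoint and render the returned values, which keeps the client free of special software dependencies.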

  17. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  18. An Embedded Real-Time System on ATLAS ROBIN

    OpenAIRE

    Yu, Maoyuan

    2012-01-01

    ATLAS is the largest particle detector at the Large Hadron Collider for high-energy physics experiments and produces over 40 TB/s of event data. The ATLAS Readout Buffer INput (ROBIN) subsystem is an essential device to buffer and reduce the data; it has an IBM PowerPC core for the control functionalities. This dissertation addresses the software design of an embedded real-time system centering on the PowerPC micro-controller, as the management core of the ROBIN. A page-based solution is pr...

  19. System Architecture Modeling for Technology Portfolio Management using ATLAS

    Science.gov (United States)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in the ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.

  20. Web tools to monitor and debug DAQ hardware

    International Nuclear Information System (INIS)

    Desavouret, Eugene; Nogiec, Jerzy M.

    2003-01-01

    A web-based toolkit to monitor and diagnose data acquisition hardware has been developed. It allows for remote testing, monitoring, and control of VxWorks data acquisition computers and associated instrumentation using the HTTP protocol and a web browser. This solution provides concurrent and platform-independent access, supplementary to the standard single-user rlogin mechanism. The toolkit is based on a specialized web server, and allows remote access and execution of select system commands and tasks, execution of test procedures, and provides remote monitoring of computer system resources and connected hardware. Various DAQ components such as multiplexers, digital I/O boards, analog-to-digital converters, or current sources can be accessed and diagnosed remotely in a uniform and well-organized manner. Additionally, the toolkit application supports user authentication and is able to enforce specified access restrictions.
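
    On the client side of such a toolkit, remote monitoring boils down to periodically fetching a status page and checking values against limits. The following sketch is illustrative only; the URL and the JSON field names are hypothetical, not the actual interface of the toolkit described above:

        # Illustrative polling client for a remote DAQ monitoring web server.
        # The URL and the JSON field names are hypothetical; a real deployment would
        # use whatever status pages the VxWorks-side web server actually exposes.
        import json
        import time
        import urllib.request

        STATUS_URL = "http://daq-node.example/status"   # hypothetical endpoint
        LIMITS = {"cpu_load": 0.9, "buffer_occupancy": 0.8}

        def poll_once():
            with urllib.request.urlopen(STATUS_URL, timeout=5) as response:
                status = json.load(response)
            for key, limit in LIMITS.items():
                value = status.get(key)
                if value is not None and value > limit:
                    print(f"WARNING: {key} = {value} exceeds limit {limit}")

        if __name__ == "__main__":
            while True:
                poll_once()
                time.sleep(10)   # poll every 10 seconds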

  1. Experiences with the new ATLAS Distributed Data Management System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00214543; The ATLAS collaboration; Serfon, Cedric; Barisits, Martin-Stefan; Lassnig, Mario; Beermann, Thomas; Guan, Wen

    2017-01-01

    The ATLAS Distributed Data Management (DDM) system has evolved drastically in the last two years with the Rucio software fully replacing the previous system before the start of LHC Run-2. The ATLAS DDM system now manages more than 250 petabytes spread over 130 storage sites and can handle file transfer rates of up to 30 Hz. In this paper, we discuss our experience acquired in developing, commissioning, running and maintaining such a large system. First, we describe the general architecture of the system, our integration with external services like the WLCG File Transfer Service and the evolution of the system over its first years of production. Then, we show the performance of the system, describe the integration of new technologies such as object stores, and outline some new developments, which mainly focus on performance and automation.

  2. MBAT: A scalable informatics system for unifying digital atlasing workflows

    Directory of Open Access Journals (Sweden)

    Sane Nikhil

    2010-12-01

    Background Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continues to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. Results The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. Conclusions MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context.

  3. Control and Data Acquisition System of the ATLAS Facility

    International Nuclear Information System (INIS)

    Choi, Ki-Yong; Kwon, Tae-Soon; Cho, Seok; Park, Hyun-Sik; Baek, Won-Pil; Kim, Jung-Taek

    2007-02-01

    This report describes the control and data acquisition system of an integral effect test facility, the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation) facility, which has recently been constructed at KAERI (Korea Atomic Energy Research Institute). The control and data acquisition system of the ATLAS is built on the hybrid distributed control system (DCS) by RTP Corp. The ARIDES system on a LINUX platform, provided by BNF Technology Inc., is used as the control software. The I/O signals consist of 1995 channels and are processed at 10 Hz. The Human-Machine Interface (HMI) consists of 43 processing windows, classified according to the fluid system. All control devices can be controlled by manual, auto, sequence, group, and table control methods. The monitoring system can display the real-time trend or historical data of the selected I/O signals on LCD monitors in graphical form. The data logging system can be started or stopped by the operator and the logging frequency can be selected among 0.5, 1, 2, and 10 Hz. The fluid system of the ATLAS facility consists of several systems, from the primary system to the auxiliary systems. Each fluid system has a control similarity to the prototype plant, APR1400/OPR1000
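
    The selectable logging frequency described above amounts to decimating a fixed-rate acquisition stream. The following is a purely illustrative sketch under that reading; the channel names and the read_channels() function are hypothetical placeholders, not part of the ARIDES software:

        # Illustrative sketch of logging a 10 Hz acquisition stream at a reduced,
        # operator-selected frequency (0.5, 1, 2 or 10 Hz). Channel names and
        # read_channels() are hypothetical placeholders.
        import time

        ACQUISITION_HZ = 10.0
        LOGGING_HZ = 2.0                      # operator-selected logging frequency
        DECIMATION = int(ACQUISITION_HZ / LOGGING_HZ)

        def read_channels():
            """Placeholder for one 10 Hz scan of the I/O channels."""
            return {"TC-001": 25.0, "PT-101": 15.5}

        def acquisition_loop(log_file):
            sample = 0
            while True:
                values = read_channels()
                if sample % DECIMATION == 0:          # keep every Nth scan for logging
                    log_file.write(f"{time.time():.1f} {values}\n")
                sample += 1
                time.sleep(1.0 / ACQUISITION_HZ)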

  4. Beam Test of the ATLAS Level-1 Calorimeter Trigger System

    CERN Document Server

    Garvey, J; Mahout, G; Moye, T H; Staley, R J; Thomas, J P; Typaldos, D; Watkins, P M; Watson, A; Achenbach, R; Föhlisch, F; Geweniger, C; Hanke, P; Kluge, E E; Mahboubi, K; Meier, K; Meshkov, P; Rühr, F; Schmitt, K; Schultz-Coulon, H C; Ay, C; Bauss, B; Belkin, A; Rieke, S; Schäfer, U; Tapprogge, T; Trefzger, T; Weber, GA; Eisenhandler, E F; Landon, M; Apostologlou, P; Barnett, B M; Brawn, I P; Davis, A O; Edwards, J; Gee, C N P; Gillman, A R; Mirea, A; Perera, V J O; Qian, W; Sankey, D P C; Bohm, C; Hellman, S; Hidvegi, A; Silverstein, S

    2005-01-01

    The Level-1 Calorimeter Trigger consists of a Preprocessor (PP), a Cluster Processor (CP), and a Jet/Energy-sum Processor (JEP). The CP and JEP receive digitised trigger-tower data from the Preprocessor and produce Regions-of-Interest (RoIs) and trigger multiplicities. The latter are sent in real time to the Central Trigger Processor (CTP) where the Level-1 decision is made. On receipt of a Level-1 Accept, Readout Driver Modules (RODs) provide intermediate results to the data acquisition (DAQ) system for monitoring and diagnostic purposes. RoI information is sent to the RoI Builder (RoIB) to help reduce the amount of data required for the Level-2 Trigger. The Level-1 Calorimeter Trigger system at the test beam consisted of 1 Preprocessor module, 1 Cluster Processor Module, 1 Jet/Energy Module and 2 Common Merger Modules. Calorimeter energies were successfully handled throughout the chain and trigger objects sent to the CTP. Level-1 Accepts were successfully produced and used to drive the readout path. Online diagno...

  5. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples, and knowledge of underlying principles of image production. An Atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted by nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas finally consisted of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and troubleshooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion into a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations. Some image patterns are

  6. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) utilizes a trigger system that consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. The ATLAS trigger has been successfully collecting collision data during the first run of the LHC (Run-1) between 2009 and 2013 at centre-of-mass energies between 900 GeV and 8 TeV. In the second run of the LHC (Run-2), starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV and provides a higher luminosity of collisions. Also, the number of collisions occurring in the same bunch crossing increases. The ATLAS trigger system has to cope with these challenges, while maintaining or even improving the efficiency to select relevant physics processes. In this talk, we will first review the ATLAS trigger ...

  7. Development of fluorocarbon evaporative cooling recirculators and controls for the ATLAS inner silicon tracker

    CERN Document Server

    Bayer, C; Bonneau, P; Bosteels, Michel; Burckhart, H J; Cragg, D; English, R; Hallewell, G D; Hallgren, Björn I; Ilie, S; Kersten, S; Kind, P; Langedrag, K; Lindsay, S; Merkel, M; Stapnes, Steinar; Thadome, J; Vacek, V

    2000-01-01

    We report on the development of evaporative fluorocarbon cooling recirculators and their control systems for the ATLAS inner silicon tracker. We have developed a prototype circulator using a dry, hermetic compressor with C3F8 refrigerant, and have prototyped the remote-control analog pneumatic links for the regulation of coolant mass flows and operating temperatures that will be necessary in the magnetic field and radiation environment around ATLAS. Pressure and flow measurement and control use 150+ channels of standard ATLAS LMB ("Local Monitor Board") DAQ and DACs on a multi-drop CAN network administered through a BridgeVIEW user interface. A hardwired thermal interlock system has been developed to cut power to individual silicon modules should their temperatures exceed safe values. Highly satisfactory performance of the circulator under steady-state, partial-load and transient conditions was seen, with proportional fluid flow tuned to varying circuit power. Future developments, including a 6 kW...

  8. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  9. The ATLAS PanDA Monitoring System and its Evolution

    CERN Document Server

    Klimentov, A; The ATLAS collaboration; Potekhin, M; Wenaus, T

    2010-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on PanDA design in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Important to meeting these and other requirements is a comprehensive monitoring system. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. We decided to migrat...

  10. Supervision of the ATLAS High Level Trigger System

    CERN Document Server

    Wheeler, S.; Meessen, C.; Qian, Z.; Touchard, F.; Negri, France A.; Zobernig, H.; CHEP 2003 Computing in High Energy Physics; Negri, France A.

    2003-01-01

    The ATLAS High Level Trigger (HLT) system provides software-based event selection after the initial LVL1 hardware trigger. It is composed of two stages, the LVL2 trigger and the Event Filter. The HLT is implemented as software tasks running on large processor farms. An essential part of the HLT is the supervision system, which is responsible for configuring, coordinating, controlling and monitoring the many hundreds of processes running in the HLT. A prototype implementation of the supervision system, using tools from the ATLAS Online Software system, is presented. Results from scalability tests are also presented, where the supervision system was shown to be capable of controlling over 1000 HLT processes running on 230 nodes.

  11. ATLAS: triggers for B-physics

    International Nuclear Information System (INIS)

    George, Simon

    2000-01-01

    The LHC will produce bb-bar events at an unprecedented rate. The number of events recorded by ATLAS will be limited by the rate at which they can be stored offline and subsequently analysed. Despite the huge number of events, the small branching ratios mean that analysis of many of the most interesting channels for CP violation and other measurements will be limited by statistics. The challenge for the Trigger and Data Acquisition (DAQ) system is therefore to maximise the fraction of interesting B decays in the B-physics data stream. The ATLAS Trigger/DAQ system is split into three levels. The initial B-physics selection is made in the first-level trigger by an inclusive low-$p_T$ muon trigger (∼6 GeV). The second-level trigger strategy is based on identifying classes of final states by their partial reconstruction. The muon trigger is confirmed before proceeding to a track search. Electron/hadron separation is given by the transition radiation tracking detector and the electromagnetic calorimeter. Muon identification is possible using the muon detectors and the hadronic calorimeter. From silicon strips, pixels and straw tracking, precise track reconstruction is used to make selections based on invariant mass, momentum and impact parameter. The ATLAS trigger group is currently engaged in algorithm development and performance optimisation for the B-physics trigger. This is closely coupled to the R&D programme for the higher-level triggers. Together the two programmes of work will optimise the hardware, architecture and algorithms to meet the challenging requirements. This paper describes the current status and progress of this work
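
    To make the kind of selection described above concrete, here is a minimal sketch of a two-track invariant-mass and momentum cut. It is not the actual ATLAS trigger code; the track representation, mass window and thresholds are hypothetical illustration values:

        # Minimal sketch of a two-track invariant-mass and momentum selection.
        # Thresholds and the track tuple format are hypothetical illustration values,
        # not the actual ATLAS B-physics trigger cuts.
        import math

        def four_vector(pt, eta, phi, mass=0.0):
            """Build (E, px, py, pz) from pt, eta, phi, assuming a given mass (GeV)."""
            px = pt * math.cos(phi)
            py = pt * math.sin(phi)
            pz = pt * math.sinh(eta)
            e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
            return e, px, py, pz

        def invariant_mass(track1, track2):
            e = track1[0] + track2[0]
            px = track1[1] + track2[1]
            py = track1[2] + track2[2]
            pz = track1[3] + track2[3]
            return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

        def passes_selection(track1, track2, mass_window=(5.0, 5.6), min_pt=1.5):
            """Accept the pair if both tracks exceed min_pt (GeV) and the pair mass
            lies inside the (hypothetical) mass window in GeV."""
            pt1 = math.hypot(track1[1], track1[2])
            pt2 = math.hypot(track2[1], track2[2])
            m = invariant_mass(track1, track2)
            return pt1 > min_pt and pt2 > min_pt and mass_window[0] < m < mass_window[1]

        t1 = four_vector(pt=2.0, eta=0.5, phi=0.1, mass=0.106)    # e.g. a muon candidate
        t2 = four_vector(pt=1.8, eta=-0.3, phi=2.5, mass=0.106)
        print(passes_selection(t1, t2))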

  12. Planetary Data Systems (PDS) Imaging Node Atlas II

    Science.gov (United States)

    Stanboli, Alice; McAuley, James M.

    2013-01-01

    The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through greater than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, JAVA), Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated meta-data from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions of similar targets. If desired, the end user can also use a mission-specific view of the Atlas. The mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine. This tool includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. This tool lets the end user query information about each image, and ignores the data that the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.

  13. Towards a Level-1 tracking trigger for the ATLAS experiment

    CERN Document Server

    Cerri, A; The ATLAS collaboration

    2014-01-01

    The future plans for the LHC accelerator allow, through a schedule of phased upgrades, an increase in the average instantaneous luminosity by a factor 5 with respect to the original design luminosity. The ATLAS experiment at the LHC will be able to maximise the physics potential from this higher luminosity only if the detector, trigger and DAQ infrastructure are adapted to handle the sustained increase in particle production rates. In this paper the changes expected to be required to the ATLAS detectors and trigger system to fulfill the requirement for working in such a high-luminosity scenario are described. The increased number of interactions per bunch crossing will result in higher occupancy in the detectors and increased rates at each level of the trigger system. The trigger selection will improve in selectivity, partly through increased granularity of the sub-detectors and the consequent higher resolution. One of the largest challenges will be the provision of tracking information at the first trigger level...

  14. A firmware implementation of a Quad HOLA S-LINK to PCI Express interface for use in the ATLAS Trigger DAQ system

    CERN Document Server

    Slenders, Daniel

    2014-01-01

    The firmware for a PCI Express interface card with four on-board high-speed optical S-LINKS (FILAREXPRESS) has been developed. This was done for an Altera Stratix II GX FPGA. Furthermore, detection of the available channels through a pull-up resistor and a readout of the on-board temperature sensor were implemented.

  15. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of LHC Run-2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  16. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2018-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger, the improvements undertaken resulted in more pile-up-robust selection efficiencies and event ra...

  17. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, GL; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through three trigger levels, selecting interesting events for analysis with a factor of 10^7 reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system, a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ system h...

  18. Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) control system

    International Nuclear Information System (INIS)

    Power, M.; Munson, F.

    2012-01-01

    Given that the Argonne Tandem Linear Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present, and future of the ATLAS Control System, and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the sixties. With the addition of the Booster section in the late seventies came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and two CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades are also in the planning stages that will continue to evolve the control system. (authors)

  19. The Detector Control System of the ATLAS experiment at CERN An application to the calibration of the modules of the Tile Hadron Calorimeter

    CERN Document Server

    Varelá-Rodriguez, F

    2002-01-01

    The principal subject of this thesis work is the design and development of the Detector Control System (DCS) of the ATLAS experiment at CERN. The DCS must ensure the coherent and safe operation of the detector and handle the communication with external systems, like the LHC accelerator and CERN services. A bidirectional data flow between the Data AcQuisition (DAQ) system and the DCS will enable coherent operation of the experiment. The LHC experiments represent new challenges for the design of the control system. The extremely high complexity of the project forces the design of different components of the detector and related systems to be performed well ahead of their use. The long lifetime of the LHC experiments imposes the use of evolving technologies and modular design. The overall dimensions of the detector and the high number of I/O channels call for a control system with processing power distributed all over the facilities of the experiment while keeping a low cost. The environmental conditions require...

  20. ATLAS TDAQ System Administration: an overview and evolution

    CERN Document Server

    LEE, CJ; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, AC; DARLEA, G-L; KOROL, A; SCANNICCHIO, DA; TWOMEY, M; VALSAN, ML

    2013-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The system processes the direct data readout from ~100 million channels on the detector through multiple trigger levels, selecting interesting events for analysis with a factor of $10^{7}$ reduction on the data rate with a latency of less than a few seconds. Most of the functionality is implemented on ~3000 servers composing the online farm. Due to the critical functionality of the system, a sophisticated computing environment is maintained, covering the online farm and ATLAS control rooms, as well as a number of development and testing labs. The specificity of the system required the development of dedicated applications (e.g. ConfDB, BWM) for system configuration and maintenance; in parallel other Open Source tools (Puppet and Quattor) are used to centrally configure the operating systems. The health monitoring of the TDAQ s...

  1. The detector control system of the ATLAS experiment

    International Nuclear Information System (INIS)

    Poy, A Barriuso; Burckhart, H J; Cook, J; Franz, S; Gutzwiller, O; Hallgren, B; Schlenker, S; Varela, F; Boterenbrood, H; Filimonov, V; Khomutnikov, V

    2008-01-01

    The ATLAS experiment is one of the experiments at the Large Hadron Collider, constructed to study elementary particle interactions in collisions of high-energy proton beams. The individual detector components as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision using operator commands, reads, processes and archives the operational parameters of the detector, allows for error recognition and handling, manages the communication with external control systems, and provides a synchronization mechanism with the physics data acquisition system. Given the enormous size and complexity of ATLAS, special emphasis was put on the use of standardized hardware and software components enabling efficient development and long-term maintainability of the DCS over the lifetime of the experiment. Currently, the DCS is being used successfully during the experiment commissioning phase

  2. Commissioning the ATLAS Level-1 Central Trigger System

    CERN Document Server

    Sherman, Daniel

    2010-01-01

    The ATLAS Level-1 central trigger is a critical part of ATLAS operation. It receives the 40 MHz bunch clock from the LHC and distributes it to all sub-detectors. It initiates their read-out by forming the Level-1 Accept decision, which is based on information from the calorimeter and muon trigger processors and a variety of additional trigger inputs from detectors in the forward region. It also provides trigger summary information to the data acquisition system and the Level-2 trigger system. In this paper, we present the completion of the installed central trigger system, its performance during cosmic-ray data taking and the experience gained with triggering on the first LHC beams.

  3. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, Mark S

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous luminosity increases, the computational load on the LVL2 system will significantly increase due to the need for more sophisticated algorithms to suppress backgrounds. The Fast Tracker (FTK) is a proposed upgrade to the ATLAS trigger system. It is designed to enable early rejection of background events and thus leave more LVL2 execution time by moving...

  4. Task management in the new ATLAS production system

    International Nuclear Information System (INIS)

    De, K; Golubkov, D; Klimentov, A; Potekhin, M; Vaniachine, A

    2014-01-01

    This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
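
    As a purely illustrative sketch of the kind of bookkeeping such a task manager performs (the class and field names below are hypothetical, not the actual DEFT/JEDI data model), a Meta-Task can be modelled as a small dependency graph of tasks that is expanded into a workflow in dependency order:

        # Illustrative sketch of a Meta-Task as a dependency graph of tasks.
        # Class and field names are hypothetical, not the actual DEFT/JEDI schema.
        from dataclasses import dataclass, field
        from graphlib import TopologicalSorter

        @dataclass
        class Task:
            name: str
            n_jobs: int
            depends_on: list = field(default_factory=list)

        def expand_meta_task(tasks):
            """Return task names in an order that respects inter-task dependencies."""
            graph = {t.name: set(t.depends_on) for t in tasks}
            return list(TopologicalSorter(graph).static_order())

        meta_task = [
            Task("evgen", n_jobs=100),
            Task("simul", n_jobs=1000, depends_on=["evgen"]),
            Task("recon", n_jobs=1000, depends_on=["simul"]),
        ]

        print(expand_meta_task(meta_task))   # ['evgen', 'simul', 'recon']

    In such a scheme the task-level manager only orders and groups the work; translating each task into its individual jobs is left to the execution layer, mirroring the DEFT/JEDI split described above.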

  5. ATLAS TDAQ System Administration: evolution and re-design

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Lee, Christopher Jon; Scannicchio, Diana; Twomey, Matthew Shaun

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of $\sim 3000$ servers, processing the data readout from $\sim 100$ million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. The OS version has been upgraded to SLC6; for the largest part of the farm, which is composed of net-booted nodes, this required a completely new design of the net-booting system. In parallel, the migration to Puppet of the Configuration Management systems has been completed for both net-booted and locally booted hosts; the Post-Boot Scripts system and...

  6. Towards a Level-1 Tracking Trigger for the ATLAS Experiment

    CERN Document Server

    De Santo, A; The ATLAS collaboration

    2014-01-01

    Plans for a physics-driven upgrade of the LHC foresee staged increases of the accelerator's average instantaneous luminosity, of up to a factor of five compared to the original design. In order to cope with the sustained luminosity increase, and the resulting higher detector occupancy and particle interaction rates, the ATLAS experiment is planning phased upgrades of the trigger system and of the DAQ infrastructure. In the new conditions, maintaining an adequate signal acceptance for electro-weak processes will pose unprecedented challenges, as the default solution to cope with the higher rates would be to increase thresholds on the transverse momenta of physics objects (leptons, jets, etc). Therefore the possibility to apply fast processing at the first trigger level in order to use tracking information as early as possible in the trigger selection represents a most appealing opportunity, which can preserve the ATLAS trigger's selectivity without reducing its flexibility. Studies to explore the feasibility o...

  7. The ATLAS ROBIN – A High-Performance Data-Acquisition Module

    CERN Document Server

    Kugel, Andreas

    2009-01-01

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously to the DAQ system. The ATLAS dataflow model follows the “PULL” strategy in contrast to the commonly used “PUSH” strategy. The data volume transported is reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the...
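
    The PULL strategy described above, where fragments are buffered at the DAQ entry and only shipped on explicit request, can be illustrated with a small sketch. The buffer class and its method names are hypothetical, not the ROBIN firmware interface:

        # Illustrative sketch of a PULL-style readout buffer: fragments are stored on
        # arrival and only sent out when explicitly requested by an event identifier.
        # Class and method names are hypothetical, not the actual ROBIN interface.
        class ReadoutBuffer:
            def __init__(self, capacity=1000):
                self.capacity = capacity
                self.fragments = {}            # event id -> raw fragment data

            def store(self, event_id, fragment):
                if len(self.fragments) >= self.capacity:
                    raise BufferError("readout buffer full, back-pressure required")
                self.fragments[event_id] = fragment

            def request(self, event_id):
                """PULL: return a fragment only when the event builder asks for it."""
                return self.fragments.get(event_id)

            def clear(self, event_ids):
                """Delete fragments for events rejected or already built."""
                for event_id in event_ids:
                    self.fragments.pop(event_id, None)

    The point of the PULL model is visible in request() and clear(): most buffered fragments are never shipped at all, which is how the transported data volume is kept roughly a factor of 10 below the input rate.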

  8. Detector Control System for the ATLAS Forward Proton detector

    CERN Document Server

    Czekierda, Sabina; The ATLAS collaboration

    2017-01-01

    The ATLAS Forward Proton (AFP) detector is a forward detector using the Roman Pot technique, recently installed in the LHC tunnel. It aims at registering protons that were diffractively or electromagnetically scattered in soft and hard processes. The infrastructure of the detector consists of hardware placed both in the tunnel and in the control room USA15 (about 330 meters from the Roman Pots). The AFP detector, like the other detectors of the ATLAS experiment, uses the Detector Control System (DCS) to supervise the detector and to ensure its safe and coherent operation, since incorrect detector performance may influence the physics results. The DCS continuously monitors the detector parameters, a subset of which is stored in databases. Crucial parameters are guarded by an alarm system. A representation of the detector as a hierarchical tree-like structure of well-defined subsystems, built with the use of the Finite State Machine (FSM) toolkit, allows for overall detector operation and visualization. Every node in the hierarchy is...
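
    As an illustrative sketch of such a hierarchical FSM (the states and the propagation rule below are simplified assumptions, not the actual behaviour of the FSM toolkit used by the ATLAS DCS), the state of a parent node can be derived from the states of its children:

        # Minimal sketch of a hierarchical detector FSM: a parent node derives its
        # state from its children. States and the propagation rule are simplified
        # assumptions, not the actual FSM toolkit used by the ATLAS DCS.
        STATE_SEVERITY = {"READY": 0, "NOT_READY": 1, "ERROR": 2}

        class Node:
            def __init__(self, name, state="READY", children=None):
                self.name = name
                self.state = state
                self.children = children or []

            def derived_state(self):
                """Propagate the worst child state upwards through the hierarchy."""
                if not self.children:
                    return self.state
                child_states = [child.derived_state() for child in self.children]
                return max(child_states, key=lambda s: STATE_SEVERITY[s])

        afp = Node("AFP", children=[
            Node("RomanPot_A", children=[Node("HV_A", "READY"), Node("LV_A", "READY")]),
            Node("RomanPot_C", children=[Node("HV_C", "ERROR"), Node("LV_C", "READY")]),
        ])

        print(afp.derived_state())   # ERROR: the most severe child state wins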

  9. The ATLAS software installation system for LCG/EGEE

    Energy Technology Data Exchange (ETDEWEB)

    Salvo, A D [Istituto Nazionale di Fisica Nucleare, sez. Roma 1 (Italy); Barchiesi, A [Universita di Roma I 'La Sapienza' (Italy); Gnanvo, K [Queen Mary and Westfield College (United Kingdom); Gwilliam, C [University of Liverpool (United Kingdom); Kennedy, J; Krobath, G [Ludwig-Maximilians-Universitaet Muenchen (Germany); Olszewski, A [Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences (Poland); Rybkine, G [Royal Holloway College (United Kingdom)

    2008-07-15

    The huge amount of resources available in the Grids, and the necessity to have the most up-to-date experimental software deployed in all the sites within a few hours, have driven the need for an automatic installation system for the LHC experiments. In this work we describe the ATLAS system for the experiment software installation in LCG/EGEE, based on the Light Job Submission Framework for Installation (LJSFi), an independent job submission framework for generic submission and job tracking in EGEE. LJSFi is able to automatically discover, check, install, test and tag the full set of resources made available in LCG/EGEE to the ATLAS Virtual Organization in a few hours, depending on the site availability.

  10. The hardware of the ATLAS Pixel Detector Control System

    International Nuclear Information System (INIS)

    Henss, T; Andreani, A; Boek, J; Boyd, G; Citterio, M; Einsweiler, K; Kersten, S; Kind, P; Lantzsch, K; Latorre, S; Maettig, P; Meroni, C; Sabatini, F; Schultes, J

    2007-01-01

    The innermost part of the ATLAS (A Toroidal LHC ApparatuS) experiment, which is currently under construction at the LHC (Large Hadron Collider), will be a silicon pixel detector comprised of 1744 individual detector modules. To operate these modules, the readout electronics, and other detector components, a complex power supply and control system is necessary. The specific powering and control requirements, as well as the custom made components of our power supply and control systems, are described. These include remotely programmable regulator stations, the power supply system for the optical transceivers, several monitoring units, and the Interlock System. In total, this comprises the Pixel Detector Control System (DCS)

  11. The Run-2 ATLAS Trigger System: Design, Performance and Plan

    CERN Document Server

    zur Nedden, Martin; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. The ATLAS experiment at the Large Hadron Collider (LHC) utilizes a trigger system that consists of a hardware Level-1 (L1) trigger and a software-based high-level trigger (HLT), reducing the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of about 1000 Hz. In LHC Run-2, starting in 2015, the LHC operates at a centre-of-mass energy of 13 TeV, providing a luminosity up to $1.2 \cdot 10^{34} {\rm cm^{-2}s^{-1}}$. The ATLAS trigger system has to cope with these challenges, while maintaining or even improving the efficiency to select relevant physics processes. In this paper, the ATLAS trigger system for LHC Run-2 is first reviewed. Then, the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy are shown. Electron, muon and photon triggers covering trans...

  12. Operation of the Upgraded ATLAS Level-1 Central Trigger System

    CERN Document Server

    Glatzer, Julian Maximilian Volker; The ATLAS collaboration

    2015-01-01

    The ATLAS Level-1 Central Trigger (L1CT) system is a central part of ATLAS data-taking and has undergone a major upgrade for Run 2 of the LHC, in order to cope with the expected increase of instantaneous luminosity of a factor of 2 with respect to Run 1. The upgraded hardware offers more flexibility in the trigger decisions due to the doubled number of trigger inputs and usable trigger channels. It also provides an interface to the new topological trigger system. Operationally (particularly useful for commissioning, calibration and test runs) it allows concurrent running of up to 3 different sub-detector combinations. In this contribution, we give an overview of the operational software framework of the L1CT system with particular emphasis on the configuration, controls and monitoring aspects. The software framework allows a consistent configuration with respect to the ATLAS experiment and the LHC machine, upstream and downstream trigger processors, and the data acquisition. Trigger and dead-time rates are m...

  13. Firmware development and testing of the ATLAS Pixel Detector / IBL ROD card

    International Nuclear Information System (INIS)

    Gabrielli, A.; Balbi, G.; Falchieri, D.; Lama, L.; Travaglini, R.; Backhaus, M.; Bindi, M.; Chen, S.P.; Hauck, S.; Hsu, S.C.; Flick, T.; Wensing, M.; Kretz, M.; Kugel, A.

    2015-01-01

    The ATLAS Experiment is reworking and upgrading systems during the current LHC shutdown. In particular, an additional inner layer called the Insertable B-Layer (IBL) has been inserted into the Pixel detector. The Readout-Driver card (ROD), the Back-of-Crate card (BOC), and the S-Link together form the essential frontend data path of the IBL's off-detector DAQ system. The strategy for IBL ROD firmware development was three-fold: keeping as much of the Pixel ROD datapath firmware logic as possible, employing a completely new scheme of steering and calibration firmware, and designing the overall system to prepare for a future unified code version integrating IBL and Pixel layers. Essential features such as data formatting, frontend-specific error handling, and calibration are added to the ROD data path. An IBL DAQ test bench using a realistic front-end chip model was created to serve as an initial framework for full offline electronic system simulation. In this document, major firmware achievements concerning the IBL ROD data path implementation and tests on the test bench and ROD prototypes will be reported. Recent Pixel collaboration efforts focus on finalizing hardware and firmware tests for the IBL. The plan is to approach a complete IBL DAQ hardware-software installation by the end of 2014

  14. The LUCID detector ATLAS luminosity monitor and its electronic system

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00378808; The ATLAS collaboration

    2016-01-01

    Starting from 2015 the LHC is performing a new run, at a higher center-of-mass energy (13 TeV) and with 25 ns bunch spacing. The ATLAS luminosity monitor LUCID has been completely renewed, both in the detector design and in the electronics, in order to cope with the new running conditions. The new detector electronics is presented, featuring a new read-out board (LUCROD) for signal acquisition and digitization, PMT-charge integration and single-side luminosity measurements, and the revisited LUMAT board for side-A/side-C combination. The contribution covers the design of the new boards, the firmware and software developments, the implementation of luminosity algorithms, the optical communication between boards and the integration into the ATLAS TDAQ system.

  15. H4DAQ: a modern and versatile data-acquisition package for calorimeter prototypes test-beams

    Science.gov (United States)

    Marini, A. C.

    2018-02-01

    The upgrade of the particle detectors for the HL-LHC or for future colliders requires an extensive program of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system, H4DAQ, was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and it has since been adopted in various applications for the CMS experiment and the AIDA project. Several calorimeter prototypes and precision timing detectors have used our system from 2014 to 2017. H4DAQ has proven to be a versatile application and has been ported to many other beam test environments. H4DAQ is fast, simple, modular and can be configured to support various kinds of setups. The functionalities of the DAQ core software are split into three configurable finite state machines: data readout, run control, and event builder. The distribution of information and data between the various computers is performed using ZeroMQ (0MQ) sockets. Plugins are available to read different types of hardware, including VME crates with many types of boards, PADE boards, custom front-end boards and beam instrumentation devices. The raw data are saved as ROOT files, using the CERN C++ ROOT libraries. A Graphical User Interface, based on the Python GTK libraries, is used to operate the H4DAQ, and an integrated data quality monitoring (DQM) system, written in C++, allows for fast processing of the events for quick feedback to the user. As the 0MQ libraries are also available for the National Instruments LabVIEW program, this environment can easily be integrated within H4DAQ applications.
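
    The split into run control, data readout and event building communicating over 0MQ sockets, as described above, can be illustrated with a minimal sketch using the pyzmq bindings. This is not H4DAQ code: the endpoint address and the command names are assumptions made for the example.

        # Minimal sketch (not H4DAQ code): a run-control process broadcasting
        # finite-state-machine commands over a 0MQ PUB socket, and a readout
        # process reacting to them. Endpoint and command names are assumptions.
        import time
        import zmq

        def run_control(endpoint="tcp://*:6000"):
            ctx = zmq.Context.instance()
            pub = ctx.socket(zmq.PUB)
            pub.bind(endpoint)
            time.sleep(1.0)                      # give subscribers time to connect
            for command in ("CONFIGURE", "START_RUN", "STOP_RUN"):
                pub.send_string(command)         # broadcast the FSM transition

        def readout_node(endpoint="tcp://localhost:6000"):
            ctx = zmq.Context.instance()
            sub = ctx.socket(zmq.SUB)
            sub.connect(endpoint)
            sub.setsockopt_string(zmq.SUBSCRIBE, "")   # receive every command
            while True:
                command = sub.recv_string()
                if command == "STOP_RUN":
                    break                        # leave the readout loop on STOP_RUN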

  16. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and for local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It allows standard ATLAS production jobs to run on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
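
    The lightweight MPI wrapper mentioned above, which runs many independent single-node payloads inside one batch allocation, can be sketched with mpi4py as follows. The payload script and log-file naming are assumptions; this is an illustration of the idea, not the actual PanDA Pilot code.

        # Illustrative sketch: one independent single-node payload per MPI rank,
        # so that a whole batch allocation is filled with parallel jobs.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank runs its own payload (placeholder script name).
        payload = ["./run_payload.sh", f"--job-index={rank}"]
        with open(f"payload_{rank:05d}.log", "w") as log:
            ret = subprocess.call(payload, stdout=log, stderr=subprocess.STDOUT)

        # Gather exit codes on rank 0 for a simple summary of the allocation.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            print("failed ranks:", [i for i, c in enumerate(codes) if c != 0])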

  17. ATCA-based ATLAS FTK input interface system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Yasuyuki [Chicago U., EFI; Liu, Tiehui Ted [Fermilab; Olsen, Jamieson [Fermilab; Iizawa, Tomoya [Waseda U.; Mitani, Takashi [Waseda U.; Korikawa, Tomohiro [Waseda U.; Yorita, Kohei [Waseda U.; Annovi, Alberto [Frascati; Beretta, Matteo [Frascati; Gatta, Maurizio [Frascati; Sotiropoulou, C-L. [Aristotle U., Thessaloniki; Gkaitatzis, Stamatios [Aristotle U., Thessaloniki; Kordas, Konstantinos [Aristotle U., Thessaloniki; Kimura, Naoki [Aristotle U., Thessaloniki; Cremonesi, Matteo [Chicago U., EFI; Yin, Hang [Fermilab; Xu, Zijun [Peking U.

    2015-04-27

    The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker are clustered and organized into overlapping eta-phi trigger towers before being sent to the tracking engines. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce data volume. Then, the ATCA-based Data Formatter system will organize the trigger tower data, sharing data among boards over full mesh backplanes and optic fibers. The board and system level design concepts and implementation details, as well as the operation experiences from the FTK full-chain testing, will be presented.

  18. FELIX: The new detector readout system for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00370160; The ATLAS collaboration

    2017-01-01

    After the Phase-I upgrades (2019) of the ATLAS experiment, the Front-End Link eXchange (FELIX) system will be the interface between the data acquisition system and the detector front-end and trigger electronics. FELIX will function as a router between custom serial links and a commodity switch network using standard technologies (Ethernet or Infiniband) to communicate with commercial data collecting and processing components. The system architecture of FELIX will be described and the status of the firmware implementation and hardware development currently in progress will be presented.

  19. FELIX: The new detector readout system for the ATLAS experiment

    Science.gov (United States)

    Ryu, Soo; ATLAS TDAQ Collaboration

    2017-10-01

    After the Phase-I upgrades (2019) of the ATLAS experiment, the Front-End Link eXchange (FELIX) system will be the interface between the data acquisition system and the detector front-end and trigger electronics. FELIX will function as a router between custom serial links and a commodity switch network using standard technologies (Ethernet or Infiniband) to communicate with commercial data collecting and processing components. The system architecture of FELIX will be described and the status of the firmware implementation and hardware development currently in progress will be presented.

  20. FELIX: the new detector readout system for the ATLAS experiment

    CERN Document Server

    Bauer, Kevin Thomas; The ATLAS collaboration

    2017-01-01

    Starting in 2018 during the planned shutdown of the LHC, the ATLAS experiment at CERN will be deploying new optical link technology (GigaBit Transceiver links) connecting the front end electronics. The Front-End LInk eXchange (FELIX) will provide an infrastructure for the new GBT links to connect to the rest of the Trigger and Data Acquisition (TDAQ) system. FELIX is a PC-based system designed to route data and commands to and from the GBT links and a Commercial Off-The Shelf (COTS) network. In this paper, the FELIX system is described and the design of the hardware prototype and core software is presented.

  1. ATCA-based ATLAS FTK input interface system

    CERN Document Server

    Okumura, Y; The ATLAS collaboration; Olsen, J; Iizawa, T; Mitani, T; Korikawa, T; Yorita, K; Annovi, A; Beretta, M; Gatta, M; Sotiropoulou, C; Gkaitatzis, S; Kordas, K; Kimura, N; Cremonesi, M; Yin, H; Xu, Z

    2014-01-01

    The first stage of the ATLAS Fast TracKer (FTK) is an ATCA-based input interface system, where hits from the entire silicon tracker must be clustered and organized into overlapping eta-phi trigger towers before being sent to the tracking processors. First, FTK Input Mezzanine cards receive hit data and perform clustering to reduce data volume. Then, the ATCA-based Data Formatter system will organize the trigger tower data, sharing data among boards over a full-mesh backplane. The board and system level performance studies and implementation details, as well as the operation experiences from the FTK full-chain testing, will be presented.

  2. Experience from a pilot based system for ATLAS

    International Nuclear Information System (INIS)

    Nilsson, P

    2008-01-01

    The PanDA software provides a high-performance distributed production and distributed analysis system. It is the first system in the ATLAS experiment to use a pilot-based late job delivery technique. This paper describes the architecture of the pilot system used in PanDA. Unique features have been implemented for highly reliable automation in a distributed environment. The performance of PanDA is analyzed from one and a half years of experience of performing distributed computing on the Open Science Grid (OSG) infrastructure. Experience with the pilot delivery mechanism using Condor-G, and with a glide-in factory developed under OSG, will be described
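
    The essence of the pilot-based late job delivery described above is a lightweight job that lands on a worker node, asks the central server for work, runs the payload and reports the result. The sketch below illustrates that loop; the server URL, endpoint paths and field names are assumptions and do not reflect the real PanDA protocol.

        # Illustrative sketch of a pilot: fetch a job late, run it, report back.
        import subprocess
        import requests

        SERVER = "https://panda.example.org"      # placeholder server URL

        def run_pilot():
            job = requests.get(f"{SERVER}/getJob", timeout=60).json()
            if not job:
                return                            # no work available; exit quietly
            result = subprocess.run(job["command"], shell=True)
            requests.post(f"{SERVER}/updateJob",
                          data={"jobId": job["id"], "exitCode": result.returncode},
                          timeout=60)

        if __name__ == "__main__":
            run_pilot()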

  3. The 40 MHz trigger-less DAQ for the LHCb Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Campora Perez, D.H. [INFN CNAF, Bologna (Italy); Falabella, A., E-mail: antonio.falabella@cnaf.infn.it [CERN, Geneva (Switzerland); Galli, D. [INFN Sezione di Bologna, Bologna (Italy); Università Bologna, Bologna (Italy); Giacomini, F. [CERN, Geneva (Switzerland); Gligorov, V. [INFN CNAF, Bologna (Italy); Manzali, M. [Università Bologna, Bologna (Italy); Università Ferrara, Ferrara (Italy); Marconi, U. [INFN Sezione di Bologna, Bologna (Italy); Neufeld, N.; Otto, A. [INFN CNAF, Bologna (Italy); Pisani, F. [INFN CNAF, Bologna (Italy); Università la Sapienza, Roma (Italy); Vagnoni, V.M. [INFN Sezione di Bologna, Bologna (Italy)

    2016-07-11

    The LHCb experiment will undergo a major upgrade during the second long shutdown (2018–2019), aiming to let LHCb collect an order of magnitude more data with respect to Run 1 and Run 2. The maximum readout rate of 1 MHz is the main limitation of the present LHCb trigger. The upgraded detector, apart from major detector upgrades, foresees a full read-out running at the LHC bunch crossing frequency of 40 MHz, using an entirely software based trigger. A new high-throughput PCIe Generation 3 based read-out board, named PCIe40, has been designed for this purpose. The read-out board will allow an efficient and cost-effective implementation of the DAQ system by means of high-speed PC networks. The network-based DAQ system reads data fragments, performs the event building, and transports events to the High-Level Trigger at an estimated aggregate rate of about 32 Tbit/s. Different architectures for the DAQ can be implemented, such as push, pull and traffic shaping with a barrel-shifter. Possible technology candidates for the foreseen event-builder under study are InfiniBand and Gigabit Ethernet. In order to define the best implementation of the event-builder, we are performing tests on different platforms with different technologies. For testing we use an event-builder evaluator, a flexible software implementation that can be run on small-size test beds as well as on HPC-scale facilities. The architecture of the DAQ system and up-to-date performance results will be presented.
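
    The bookkeeping at the heart of any such event builder, collecting fragments from many readout sources and releasing a complete event once every source has contributed, can be sketched as follows. The number of sources and the data structures are assumptions for illustration and have nothing to do with the actual PCIe40 implementation.

        # Illustrative sketch of event-builder bookkeeping: fragments are keyed by
        # event number and an event is complete when all sources have reported.
        from collections import defaultdict

        N_SOURCES = 4                               # number of readout units (assumed)

        class EventBuilder:
            def __init__(self):
                self.pending = defaultdict(dict)    # event_id -> {source_id: fragment}

            def add_fragment(self, event_id, source_id, fragment):
                self.pending[event_id][source_id] = fragment
                if len(self.pending[event_id]) == N_SOURCES:
                    return self.pending.pop(event_id)   # complete event, ready for the HLT
                return None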

  4. Evaluation and proposal of improvement for the measurement system in ATLAS

    International Nuclear Information System (INIS)

    Cho, Dong Woo; Kim, Jong Rok; Park, Jun Kwon

    2007-03-01

    The project independently evaluated the validity and reliability of the measurement system in ATLAS, and then proposed plans to improve the measurement system based on the evaluation results. For this objective, we evaluated the design, technical background, and verification data of the ATLAS measurement system. From this evaluation, we proposed improvement plans for the parts that require them

  5. The associative memory system for the FTK processor at ATLAS

    CERN Document Server

    Magalotti, D; The ATLAS collaboration; Donati, S; Luciano, P; Piendibene, M; Giannetti, P; Lanza, A; Verzellesi, G; Sakellariou, Andreas; Billereau, W; Combe, J M

    2014-01-01

    In high energy physics experiments, the most interesting processes are very rare and hidden in an extremely large level of background. As the experiment complexity, accelerator backgrounds, and instantaneous luminosity increase, more effective and accurate data selection techniques are needed. The Fast TracKer processor (FTK) is a real time tracking processor designed for the ATLAS trigger upgrade. The FTK core is the Associative Memory system. It provides massive computing power to minimize the processing time of complex tracking algorithms executed online. This paper reports on the results and performance of a new prototype of Associative Memory system.

  6. Thermal Performance of ATLAS Laser Thermal Control System Demonstration Unit

    Science.gov (United States)

    Ku, Jentung; Robinson, Franklin; Patel, Deepak; Ottenstein, Laura

    2013-01-01

    The second Ice, Cloud, and Land Elevation Satellite mission currently planned by the National Aeronautics and Space Administration will measure global ice topography and canopy height using the Advanced Topographic Laser Altimeter System (ATLAS). The ATLAS comprises two lasers, but only one will be used at a time. Each laser will generate between 125 watts and 250 watts of heat, and each laser has its own optimal operating temperature that must be maintained within plus or minus 1 degree Centigrade accuracy by the Laser Thermal Control System (LTCS), consisting of a constant conductance heat pipe (CCHP), a loop heat pipe (LHP) and a radiator. The heat generated by the laser is acquired by the CCHP and transferred to the LHP, which delivers the heat to the radiator for ultimate rejection. The radiator can be exposed to temperatures between minus 71 degrees Centigrade and minus 93 degrees Centigrade. The two lasers can have different operating temperatures varying between plus 15 degrees Centigrade and plus 30 degrees Centigrade, and their operating temperatures are not known while the LTCS is being designed and built. Major challenges of the LTCS include: 1) A single thermal control system must maintain the ATLAS at 15 degrees Centigrade with a 250 watt heat load and a minus 71 degrees Centigrade radiator sink temperature, and maintain the ATLAS at plus 30 degrees Centigrade with a 125 watt heat load and a minus 93 degrees Centigrade radiator sink temperature. Furthermore, the LTCS must be qualification tested to maintain the ATLAS between plus 10 degrees Centigrade and plus 35 degrees Centigrade. 2) The LTCS must be shut down to ensure that the ATLAS can be maintained above its lowest desirable temperature of minus 2 degrees Centigrade during the survival mode. No software control algorithm for the LTCS can be activated during survival and only thermostats can be used. 3) The radiator must be kept above minus 65 degrees Centigrade to prevent ammonia from freezing using no more

  7. Measurement Of Neutron Radius In Lead By Parity Violating Scattering Flash ADC DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Zafar [Christopher Newport Univ., Newport News, VA (United States)

    2012-06-01

    This dissertation reports on the PREx experiment, a parity violation experiment designed to measure the neutron radius in 208Pb. PREx was performed in Hall A of the Thomas Jefferson National Accelerator Facility from March 19th to June 21st. Longitudinally polarized electrons at an energy of 1 GeV were scattered at an angle of θlab = 5.8° from the lead target. The beam-corrected parity-violating counting rate asymmetry is Acorr = 594 ± 50(stat) ± 9(syst) ppb at Q² = 0.009068 GeV². This dissertation also presents the details of the Flash ADC Data Acquisition (FADC DAQ) system for Moller polarimetry in Hall A of the Thomas Jefferson National Accelerator Facility. The Moller polarimeter measures the beam polarization to high precision to meet the specification of PREx (the lead radius experiment). The FADC DAQ is part of the upgrade of the Moller polarimetry to reduce the systematic error for PREx. The hardware setup and the results of the FADC DAQ analysis are presented

  8. ATLAS TRT Barrel in Test Beam

    CERN Multimedia

    Luehring, F

    In July, the TRT group made a highly successful test of 6 Barrel TRT modules in the ATLAS H8 testbeam. Over 3000 TRT straw tubes (4 mm diameter gas drift tubes) were instrumented and found to operate well. The prototype represents 1/16 of the ATLAS TRT barrel and was assembled from TRT modules produced as spares. This was the largest scale test of the TRT to date and the measured detector performance was as good as or better than expected in all cases. (Photo: the 2004 TRT testbeam setup, before final cabling was attached.) The readout chain and central DAQ system used in the TRT testbeam are final prototypes for the ATLAS experiment. The TRT electronics used to read out the data were: The Amplifier/Shaper/Discriminator with Baseline Restoration (ASDBLR) chip is the front-end analog chip that shapes and discriminates the electronic pulses generated by the TRT straws. The Digital Time Measurement Read Out Chip (DTMROC) measures the time of the pulse relative to the beam crossing time. The TRT-ROD ...

  9. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of a given language to simplify end-user program writing. For example, since C++ offers no concise way of declaring rich custom exception hierarchies, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class passing relevant values to one
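
    The single-line declaration idea described above can be mimicked in Python, where a small factory generates an exception class carrying typed context, much as the C++ macros generate boilerplate at compile time. The class and field names below are invented for the example and are not part of the real ERS API.

        # Illustrative sketch (not the ERS API): generate an issue class from a
        # one-line declaration, so each issue type carries its own typed context.
        def declare_issue(name, message_template, fields):
            def __init__(self, **kwargs):
                missing = [f for f in fields if f not in kwargs]
                if missing:
                    raise TypeError(f"{name} missing fields: {missing}")
                self.context = kwargs
                Exception.__init__(self, message_template.format(**kwargs))
            return type(name, (Exception,), {"__init__": __init__})

        # One line replaces the hand-written boilerplate for this issue type.
        ReadoutTimeout = declare_issue(
            "ReadoutTimeout", "no data from {channel} after {seconds}s", ["channel", "seconds"])

        try:
            raise ReadoutTimeout(channel="ROD-12", seconds=5)
        except ReadoutTimeout as issue:
            print(issue, issue.context)             # formatted message plus context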

  10. Detector control system of the ATLAS insertable B-Layer

    International Nuclear Information System (INIS)

    Kersten, S.; Kind, P.; Lantzsch, K.; Maettig, P.; Zeitnitz, C.; Gensolen, F.; Citterio, M.; Meroni, C.; Verlaat, B.; Kovalenko, S.

    2012-01-01

    To improve the tracking robustness and precision of the ATLAS inner tracker, an additional, fourth pixel layer is foreseen, called the Insertable B-Layer (IBL). It will be installed between the innermost present Pixel layer and a new, smaller beam pipe and is presently under construction. As no access is possible once it is installed into the experiment, a highly reliable control system is required. It has to supply the detector with all entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-ups, and the protection of the front end electronics against transients. We present the architecture of the control system with an emphasis on the CO2 cooling system, the power supply system, and the protection strategies. As we aim for a common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel control system will also be discussed. (authors)

  11. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Maeda, Junpei; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high-level trigger that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the data-taking period of Run-2 the LHC will operate at a centre-of-mass energy of about 13 TeV, resulting in roughly five times higher trigger rates. In these proceedings, we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the Level-1 calorimeter and muon trigger system, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level higher-level trigger system into a single even...

  12. The ATLAS Trigger System: Ready for Run II

    CERN Document Server

    Czodrowski, Patrick; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger system has been used successfully for data collection in the 2009-2013 Run 1 operation cycle of the CERN Large Hadron Collider (LHC) at center-of-mass energies of up to 8 TeV. With the restart of the LHC for the new Run 2 data-taking period at 13 TeV, the trigger rates are expected to rise by approximately a factor of 5. The trigger system consists of a hardware-based first level (L1) and a software-based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of ~ 1kHz. This presentation will give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown period in order to deal with the increased trigger rates while efficiently selecting the physics processes of interest. These upgrades include changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system, and the merging of the previously two-level HLT ...

  13. The ATLAS Trigger System : Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware based Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the course of the ongoing Run-2 data-taking campaign at 13 TeV centre-of-mass energy the trigger rates will be approximately 5 times higher compared to Run-1. In these proceedings we briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger subsystem and the merging of the previously two-level HLT system into a single ev...

  14. The ATLAS PanDA Monitoring System and its Evolution

    Science.gov (United States)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
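
    In the spirit of the Django and JSON/AJAX approach described above, the sketch below shows a Django-style view that returns job summaries as JSON for a browser front end to render. The view name, query parameter and placeholder data are assumptions, not the actual PanDA monitor code or schema.

        # Illustrative sketch of a JSON view for an AJAX front end (not PanDA code).
        from django.http import JsonResponse

        def job_summary(request):
            site = request.GET.get("site", "all")
            # In the real application this would come from the job database;
            # here a placeholder list stands in for the query result.
            jobs = [{"site": site, "status": "running", "count": 1234},
                    {"site": site, "status": "finished", "count": 56789}]
            return JsonResponse({"site": site, "jobs": jobs})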

  15. The ATLAS PanDA Monitoring System and its Evolution

    International Nuclear Information System (INIS)

    Klimentov, A; Nevski, P; Wenaus, T; Potekhin, M

    2011-01-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  16. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211007; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been successfully collecting collision data during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will briefly review the ATLAS trigger system upgrades that were implemented during the shutdown, allowing us to cope with the increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter and muon trigger system, the introduction of a new L1 topological trigger module and the merging of the previously two-level HLT system into a single event filter fa...

  17. Role Based Access Control system in the ATLAS experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F; Avolio, G

    2011-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever growing needs of restricting access to all resources used within the experiment, the Roles Based Access Control (RBAC) previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...
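
    At its core, a role-based access check of the kind described above reduces to mapping a user's roles onto permissions and testing whether a requested action is covered. The sketch below illustrates that check; the role names, permission strings and mapping are invented for the example and do not reflect the actual ATLAS access control rules.

        # Illustrative sketch of a role-to-permission check (not the ATLAS RBAC rules).
        ROLE_PERMISSIONS = {
            "shifter":  {"daq:view", "daq:start_run"},
            "expert":   {"daq:view", "daq:start_run", "daq:modify_config"},
            "observer": {"daq:view"},
        }

        def is_allowed(user_roles, permission):
            return any(permission in ROLE_PERMISSIONS.get(role, set())
                       for role in user_roles)

        print(is_allowed(["shifter"], "daq:modify_config"))   # False
        print(is_allowed(["expert"], "daq:modify_config"))    # True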

  18. Role Based Access Control System in the ATLAS Experiment

    CERN Document Server

    Valsan, M L; The ATLAS collaboration; Lehmann Miotto, G; Scannicchio, D A; Schlenker, S; Filimonov, V; Khomoutnikov, V; Dumitru, I; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Avolio, G; Caramarcu, C; Ballestrero, S; Darlea, G L; Twomey, M; Bujor, F

    2010-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever growing needs of restricting access to all resources used within the experiment, the Roles Based Access Control (RBAC) previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The paper continues with a detailed description of the integration across all areas of the system: local Linux and Windows nodes in the ATLAS Control Network (ATCN), the Linux application gateways offering remote access inside ATCN, the Windows Terminal Serv...

  19. A Fast hardware Tracker for the ATLAS Trigger system

    CERN Document Server

    Pandini, Carlo Enrico; The ATLAS collaboration

    2015-01-01

    The trigger system at the ATLAS experiment is designed to lower the event rate occurring from the nominal bunch crossing at 40 MHz to about 1 kHz for a designed LHC luminosity of 10$^{34}$ cm$^{-2}$ s$^{-1}$. After a very successful data taking run the LHC is expected to run starting in 2015 with much higher instantaneous luminosities and this will increase the load on the High Level Trigger system. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals, which requires a more extensive use of tracking information. The Fast Tracker (FTK) trigger system, part of the ATLAS trigger upgrade program, is a highly parallel hardware device designed to perform full-scan track-finding at the event rate of 100 kHz. FTK is a dedicated processor based on a mixture of advanced technologies. Modern, powerful, Field Programmable Gate Arrays form an important part of the system architecture, and the combinatorial problem of pattern r...

  20. Quality of service on Linux for the Atlas TDAQ event building network

    International Nuclear Information System (INIS)

    Yasu, Y.; Manabe, A.; Fujii, H.; Watase, Y.; Nagasaka, Y.; Hasegawa, Y.; Shimojima, M.; Nomachi, M.

    2001-01-01

    Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching network technologies. Quality of Service (QoS) is a technique for congestion control. Recent Linux releases provide QoS in the kernel to manage network traffic. The authors have analyzed the packet loss and packet distribution for the event builder prototype of the Atlas TDAQ system. The authors used PC/Linux with a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfer, while best-effort UDP/IP transfer suffered substantial packet loss. The results also showed that the QoS overhead was small. The authors concluded that QoS on Linux performed efficiently in TCP/IP and UDP/IP and will have an important role in the Atlas TDAQ system
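
    Rate limiting of the kind studied above can be applied on a Linux host with the standard tc tool, for example by attaching a Token Bucket Filter (TBF) queueing discipline to an interface. The sketch below drives tc from Python; the interface name and rate parameters are assumptions for illustration, not the configuration used in the study, and the commands require root privileges.

        # Illustrative sketch: attach/remove a TBF queueing discipline with tc.
        import subprocess

        def apply_tbf(interface="eth0", rate="500mbit", burst="64kb", latency="50ms"):
            subprocess.run(["tc", "qdisc", "replace", "dev", interface, "root",
                            "tbf", "rate", rate, "burst", burst, "latency", latency],
                           check=True)

        def clear_qdisc(interface="eth0"):
            subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"],
                           check=True)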

  1. The ATLAS Data Acquisition System in LHC Run 2

    CERN Document Server

    Panduro Vazquez, William; The ATLAS collaboration

    2016-01-01

    The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Trigger and Data Acquisition system has been upgraded to deal with the increased event rates. The dataflow element of the system is distributed across hardware and software and is responsible for buffering and transporting event data from the Readout system to the High Level Trigger and on to event storage. The dataflow system has been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The updated dataflow system is radically different from the previous implementation both in terms of architecture and performance. The previous two-level software filtering architecture, consisting of L2 and the Event Filter, has been merged with the Event Builder function into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: radical simplificatio...

  2. The ATLAS Trigger System: Ready for Run-2

    CERN Document Server

    Nakahama, Yu; The ATLAS collaboration

    2015-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first run of the LHC between 2009-2013 at a centre-of-mass energy between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 (L1) and a software based high-level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. During the next data-taking period starting in early 2015 (Run-2) the LHC will operate at a centre-of-mass energy of about 13 TeV resulting in roughly five times higher trigger rates. We will review the upgrades to the ATLAS Trigger system that have been implemented during the shutdown and that will allow us to cope with these increased trigger rates while maintaining or even improving our efficiency to select relevant physics processes. This includes changes to the L1 calorimeter trigger, the introduction of a new L1 topological trigger module, improvements in the L1 muon system and the merging of the prev...

  3. Module and electronics developments for the ATLAS ITK pixel system

    CERN Document Server

    Nellist, Clara; The ATLAS collaboration

    2016-01-01

    ATLAS is preparing for an extensive modification of its detector in the course of the planned HL-LHC accelerator upgrade around 2025, which includes a replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). A revised trigger and data taking system is foreseen, with triggers expected at the lowest level at an average rate of 1 MHz. The five innermost layers of ITk will comprise a pixel detector built of new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m2, depending on the final layout choice that is expected to take place in early 2017. A new on-detector readout chip is designed in the context of the RD53 collaboration in 65 nm CMOS technology. This paper will present the on-going R&D within the ATLAS ITk project towards the new pixel modules and the off-detector electronics. Pla...

  4. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service(ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  5. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service(ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  6. Real time physics analysis with the ATLAS tau trigger system

    International Nuclear Information System (INIS)

    Casado Lechuga, M. P.

    2009-01-01

    The scope of the ATLAS tau trigger system at the LHC is most ambitious. It aims at reconstructing in real time, within a matter of seconds, a detailed picture of the high energy proton-proton collisions at the LHC. Such a system is mandatory in order to efficiently select the data needed for the discovery of new physics in a proton-proton collision environment where the rates of jets observed in the detector are high and tau identification is difficult. New physics scenarios targeted specifically by the ATLAS tau trigger system are Standard Model or Supersymmetric Higgs production, and the production of new exotic resonances. This contribution will detail how the analysis techniques developed offline for efficient data analysis have been implemented in the algorithms which run online at the trigger. In particular, the focus will be on how to satisfy the requirements imposed by the physics goals while addressing the limitations from the overall event rate and the allowed latency. The prospects for early running during the first LHC collisions and the trigger evolution from first collisions to stable running will also be summarized, following the change of trigger goals from detector commissioning to the measurement of Standard Model physics and discoveries. (author)

  7. The Resource Manager of the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Aleksandrov, Igor; The ATLAS collaboration; Lehmann Miotto, Giovanna; Soloviev, Igor

    2016-01-01

    The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right for applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager’s design is based on a client-server model, hence it consists of two components: the Resource Manager "server" application and the "client" shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager c...
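
    The lock-manager behaviour described above, granting access to resources that exist in a limited number of copies and refusing further requests once the copies are exhausted, can be sketched as follows. The resource names and copy counts are assumptions for illustration, not the actual configuration database contents.

        # Illustrative sketch of resource marshalling (not the TDAQ Resource Manager).
        class ResourceManager:
            def __init__(self, resources):
                self.free = dict(resources)        # resource name -> available copies
                self.owners = {}                   # (resource, app) -> granted copies

            def acquire(self, resource, app):
                if self.free.get(resource, 0) <= 0:
                    raise RuntimeError(f"no free copies of {resource} for {app}")
                self.free[resource] -= 1
                self.owners[(resource, app)] = self.owners.get((resource, app), 0) + 1

            def release(self, resource, app):
                if self.owners.get((resource, app), 0) > 0:
                    self.owners[(resource, app)] -= 1
                    self.free[resource] += 1

        rm = ResourceManager({"ROD-crate-5": 1, "calibration-trigger": 2})
        rm.acquire("ROD-crate-5", "pixel-daq")     # a second acquire would be refused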

  8. Rucio, the next-generation Data Management system in ATLAS

    CERN Document Server

    Serfon, C; The ATLAS collaboration; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and access by 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. In this talk, we will present the history of the DDM project and the experience of data management operation in ATLAS computing. We will then show the key concepts of Rucio, including its data organization. The Rucio design, and the technology it e...

  9. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to the different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, ranging from a high-level summary for the whole cloud to detailed diagnostics for the individual site services. This approach provides efficient identification of correlated site problems and simplifies administration at both the cloud and site level.

  10. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P. [Queen Mary, University of London, London (United Kingdom); Bosman, M. [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D. [CERN, Geneva (Switzerland); Caprini, M. [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A. [University of California Irvine, Irvine, California (United States); Costa, M.J. [CERN, Geneva (Switzerland); Della Pietra, M. [INFN Sezione diNapoli, Napoli (Italy); Dotti, A. [Universita and INFN Pisa, Pisa (Italy); Eschrich, I. [University of California Irvine, Irvine, California (United States); Ferrari, R. [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M.L. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G. [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H. [Southern Methodist University, Dallas (United States); Hauschild, M. [CERN, Geneva (Switzerland); Hillier, S. [University of Birmingham, Birmingham (United Kingdom); Kehoe, B. [Southern Methodist University, Dallas (United States); Kolos, S. [University of California Irvine, Irvine, California (United States); Kordas, K. [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R. [University of Victoria, Vancouver (Canada)] (and others)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center of mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  11. The GNAM system in the ATLAS online monitoring framework

    Energy Technology Data Exchange (ETDEWEB)

    Salvatore, D. [INFN Cosenza and Dip. di Fisica, Universita della Calabria, ponte P. Bucci 31 C, 87036 Rende (Italy)], E-mail: daniela.salvatore@cern.ch; Adragna, P [Queen Mary, University of London, London (United Kingdom); Bosman, M [IFAE, Institut de Fisica de Altes Energies, UAB/Barcelona (Spain); Burckhart, D [CERN, Geneva (Switzerland); Caprini, M [National Institute for Physics and Nuclear Engineering, Bucharest (Romania); Corso-Radu, A [University of California Irvine, Irvine, California (United States); Costa, M J [CERN, Geneva (Switzerland); Della Pietra, M [INFN Sezione diNapoli, Napoli (Italy); Dotti, A [Universita and INFN Pisa, Pisa (Italy); Eschrich, I [University of California Irvine, Irvine, California (United States); Ferrari, R [INFN Sezione di Pavia, Pavia (Italy); Ferrer, M L [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Gaudio, G [INFN Sezione di Pavia, Pavia (Italy); Hadavand, H [Southern Methodist University, Dallas (United States); Hauschild, M [CERN, Geneva (Switzerland); Hillier, S [University of Birmingham, Birmingham (United Kingdom); Kehoe, B [Southern Methodist University, Dallas (United States); Kolos, S [University of California Irvine, Irvine, California (United States); Kordas, K [INFN Laboratori Nazionali di Frascati, Frascati (Italy); Mcpherson, R [University of Victoria, Vancouver (Canada)

    2007-10-15

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center of mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow.

  12. The GNAM system in the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Salvatore, D.; Adragna, P.; Bosman, M.; Burckhart, D.; Caprini, M.; Corso-Radu, A.; Costa, M.J.; Della Pietra, M.; Dotti, A.; Eschrich, I.; Ferrari, R.; Ferrer, M.L.; Gaudio, G.; Hadavand, H.; Hauschild, M.; Hillier, S.; Kehoe, B.; Kolos, S.; Kordas, K.; Mcpherson, R.

    2007-01-01

    ATLAS [ATLAS Collaboration, 'ATLAS Technical Proposal', CERN/LHCC/94-43, LHCC/P2, CERN, Geneva, Switzerland, 1994] is one of the four experiments under construction along the Large Hadron Collider (LHC) ring, which will produce interactions at a center of mass energy of 14 TeV at a 40 MHz rate. The detector consists of more than 140 million electronic channels. The challenging experimental environment and the extreme detector complexity impose the necessity of a common scalable distributed monitoring framework, which can be tuned for optimal use by the different ATLAS detectors at the various levels of the ATLAS data flow

  13. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, also to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with the central services of the CERN IT department. The paper describes a small scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  14. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    Science.gov (United States)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access file. A description of the matrices written on these files is contained herein.

  15. Technical Design Report for the Phase-I Upgrade of the ATLAS TDAQ System

    CERN Document Server

    AUTHOR|(CDS)2069742; Abbott, Brad; Abdallah, Jalal; Abdel Khalek, Samah; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Achenbach, Ralf; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Aefsky, Scott; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmad, Ashfaq; Ahmadov, Faig; Aielli, Giulio; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alam, Muhammad Aftab; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexandrov, Evgeny; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allison, Lee John; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alonso, Francisco; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amaral Coutinho, Yara; Amelung, Christoph; Amor Dos Santos, Susana Patricia; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, John Thomas; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Araujo Ferraz, Victor; Arce, Ayana; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ask, Stefan; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Augsten, Kamil; Augusto, José; Aurousseau, Mathieu; Avolio, Giuseppe; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baas, Alessandra; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Oliver Keith; Baker, Sarah; Balek, Petr; Ballestrero, Sergio; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bangert, Andrea Michelle; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bartsch, Valeria; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Batraneanu, Silvia; Battistin, Michele; Bauer, Florian; Bauss, Bruno; Bawa, Harinder Singh; Beacham, James Baker; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; 
Bellerive, Alain; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernard, Clare; Bernat, Pauline; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertelsen, Henrik; Bertolucci, Federico; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Bittner, Bernhard; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Boddy, Christopher Richard; Boehler, Michael; Boek, Jennifer; Boek, Thorsten Tobias; Bogdan, Mircea Arghir; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldyrev, Alexey; Bolnet, Nayanka Myriam; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borga, Andrea; Borisov, Anatoly; Borissov, Guennadi; Borri, Marcello; Borroni, Sara; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutouil, Sara; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Brawn, Ian; Brazzale, Simone Federico; Brelier, Bertrand; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Richard; Bressler, Shikma; Bristow, Kieran; Bristow, Timothy Michael; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Brown, Gareth; Brown, Jonathan; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Budick, Burton; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bundock, Aaron Colin; Bunse, Moritz; Burdin, Sergey; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Volker; Bussey, Peter; Buszello, Claus-Peter; Butler, Bart; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Buttinger, William; Buzatu, Adrian; Byszewski, Marcin; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; 
Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Casadei, Diego; Casado, Maria Pilar; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerio, Benjamin; Cerny, Karel; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chan, Kevin; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Charfeddine, Driss; Charlton, Dave; Chavda, Vikash; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Liming; Chen, Shenjian; Chen, Xin; Chen, Yujiao; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiefari, Giovanni; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christidi, Ilektra-Athanasia; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciocio, Alessandra; Ciodaro Xavier, Thiago; Cirkovic, Predrag; Citraro, Saverio; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Clarke, Robert; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Coggeshall, James; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collins-Tooth, Christopher; Collot, Johann; Colombo, Tommaso; Colon, German; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Connelly, Ian; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Cottin, Giovanna; Coura Torres, Rodrigo; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Crispin Ortuzar, Mireia; Cristinziani, Markus; Crone, Gordon Jeremy; Crosetti, Giovanni; Cuciuc, Constantin-Mihai; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Daniells, Andrew Christopher; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Darmora, Smita; Dassoulas, James; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; Davies, Merlin; 
Davignon, Olivier; Davison, Adam; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dechenaux, Benjamin; Dedovich, Dmitri; Degenhardt, James; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delemontex, Thomas; Deliot, Frederic; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; do Vale, Maria Aline Barros; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Doglioni, Caterina; Doherty, Tom; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drake, Gary; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudziak, Fanny; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Dwuznik, Michal; Ebke, Johannes; Edmunds, Daniel; Edson, William; Edwards, Clive; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Erdmann, Johannes; Ereditato, Antonio; Ermoline, Iouri; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evangelakou, Despoina; Evans, Hal; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Faulkner, Peter; Favareto, Andrea; Fayard, Louis; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Fehling-Kaschek, Mirjam; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Fernandez Perez, Sonia; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, 
Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Julia; Fisher, Matthew; Fitzgerald, Eric Andrew; Flechl, Martin; Fleck, Ivor; Fleischmann, Philipp; Fleischmann, Sebastian; Fletcher, Gareth Thomas; Fletcher, Gregory; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Florez Bustos, Andres Carlos; Flowerdew, Michael; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fox, Harald; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; French, Sky; Friedrich, Conrad; Friedrich, Felix; Froidevaux, Daniel; Front, David Moris; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gadatsch, Stefan; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gandrajula, Reddy Pratap; Gao, Jun; Gao, Yongsheng; Garay Walls, Francisca; Garberson, Ford; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; Gentsos, Christos; George, Matthias; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghibaudi, Marco; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giangiobbe, Vincent; Giannetti, Paola; Gianotti, Fabiola; Gibson, Stephen; Gillam, Thomas; Gillberg, Dag; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Giunta, Michele; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glazov, Alexandre; Glonti, George; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goeringer, Christian; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Gama, Rafael; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabas, Herve Marie Xavier; Graber, Lars; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Grebenyuk, Oleg; Green, Barry; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, 
Sebastian; Gris, Philippe Luc Yves; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grohs, Johannes Philipp; Grohsjean, Alexander; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Groth-Jensen, Jacob; Grout, Zara Jane; Grybel, Kai; Guan, Liang; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guicheney, Christophe; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Gunther, Jaroslav; Guo, Jun; Gupta, Shaun; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guttman, Nir; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Haefner, Petra; Hageböck, Stephan; Hakobyan, Hrachya; Haleem, Mahsana; Hall, David; Halladjian, Garabed; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hansson, Per; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Satoshi; Hasegawa, Yoji; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Heisterkamp, Simon; Hejbal, Jiri; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, James; Henderson, Robert; Hengler, Christopher; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herrberg-Schubert, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hickling, Robert; Higón-Rodriguez, Emilio; Higuchi, Kota; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hofmann, Julia Isabell; Hohlfeld, Marc; Holmes, Tova Ray; Hong, Tae Min; Hooft van Huysduynen, Loek; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Xueye; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huettmann, Antje; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Hurwitz, Martina; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikematsu, Katsumasa; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Inamaru, Yuki; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Matthew; Jackson, 
Paul; Jaekel, Martin; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansen, Hendrik; Janssen, Jens; Jansweijer, Peter Paul Maarten; Janus, Michel; Jarlskog, Göran; Jeanty, Laura; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Jha, Manoj Kumar; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Joergensen, Morten Dam; Johansson, Erik; Johansson, Per; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Joos, Markus; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Jussel, Patrick; Juste Rozas, Aurelio; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahra, Christian; Kajomovitz, Enrique; Kaluza, Adam; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kanno, Takayuki; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karastathis, Nikolaos; Karnevskiy, Mikhail; Karpov, Sergey; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Katre, Akshay; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Kazarov, Andrei; Keeler, Richard; Kehoe, Robert; Keil, Markus; Keller, John; Kempster, Jacob Julian; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kiese, Patric Karl; Kim, Hyeon Jin; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kitamura, Takumi; Kiuchi, Kenji; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klimkovich, Tatsiana; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Klok, Peter; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolanoski, Hermann; Koletsou, Iro; Koll, James; Kolos, Serguei; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Köneke, Karsten; König, Adriaan; K{ö}nig, Sebastian; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kreiss, Sven; 
Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kurumida, Rie; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; La Rosa, Alessandro; La Rotonda, Laura; Lablak, Said; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laier, Heiko; Laisne, Emmanuel; Lambourne, Luke; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, Clemens; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Laurens, Philippe; Lavorini, Vincenzo; Lavrijsen, Wim; Laycock, Paul; Le, Bao Tran; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire, Alexandra; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leonhardt, Kathrin; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Lester, Christopher Michael; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Bo; Li, Haifeng; Li, Ho Ling; Li, Shu; Li, Xuefei; Liang, Zhijun; Liao, Hongbo; Liberali, Valentino; Liberti, Barbara; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Linde, Frank; Lindquist, Brian Edward; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Lombardo, Vincenzo Paolo; Long, Brian Alexander; Long, Jonathan; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Luciano, Pierluigi; Lucotte, Arnaud; Ludwig, Dörthe; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Johan; Lundberg, Olof; Lund-Jensen, Bengt; Lungwitz, Matthias; Luongo, Carmela; Lupu, Nachman; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Macey, Tom; Machado 
Miguens, Joana; Macina, Daniela; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeno, Mayuko; Maeno, Tadashi; Magnoni, Luca; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, Joern; Mahmoud, Sara; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Maldaner, Stephan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mamuzic, Judita; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany Andreina; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mantifel, Rodger; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian; Martin, Brian; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Homero; Martinez, Mario; Martin-Haugh, Stewart; Martyniuk, Alex; Marx, Marilyn; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Matsunaga, Hiroyuki; Matsushita, Takashi; Mättig, Peter; Mättig, Stefan; Mattmann, Johannes; Mattravers, Carly; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazzaferro, Luca; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; Mcfayden, Josh; Mchedlidze, Gvantsa; Mclaughlan, Tom; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Medinnis, Michael; Meehan, Samuel; Meera-Lebbai, Razzak; Meessen, Christophe; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mendoza Navas, Luis; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Meric, Nicolas; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano Moya, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Mitsui, Shingo; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Moeller, Victoria; Mohapatra, Soumya; Molander, Simon; Moles-Valls, Regina; Mönig, Klaus; Monini, Caterina; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morange, Nicolas; Morel, Julien; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Moyse, Edward; Muanza, Steve; Mudd, Richard; 
Mueller, Felix; Mueller, James; Mueller, Klemens; Mueller, Thibaut; Mueller, Timo; Muenstermann, Daniel; Munwes, Yonathan; Murillo Garcia, Raul; Murillo Quijada, Javier Alberto; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Nanava, Gizo; Napier, Austin; Narayan, Rohin; Nash, Michael; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negri, Guido; Negrini, Matteo; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nguyen Thi Hong, Van; Nickerson, Richard; Nicolaidou, Rosy; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaidis, Spyridon; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Norberg, Scarlet; Nordberg, Markus; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nuti, Francesco; O'Brien, Brendan Joseph; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakes, Louise Beth; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Simon; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panduro Vazquez, William; Panes, Boris; Pani, Priscilla; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pearce, James; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, 
Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petteni, Michele; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Piec, Sebastian Marcin; Piegaia, Ricardo; Piendibene, Marco; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Pingel, Almut; Pinto, Belmiro; Pizio, Caterina; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Poddar, Sahill; Podlyski, Fabrice; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Pohl, Martin; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Portell Bueso, Xavier; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Prabhu, Robindra; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Price, Darren; Price, Joe; Price, Lawrence; Primavera, Margherita; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przybycien, Mariusz; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Pueschel, Elisa; Puldon, David; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Weiming; Quadt, Arnulf; Quarrie, David; Quayle, William; Quilty, Donnchadha; Quinonez, Fernando; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Randle-Conde, Aidan Sean; Rangel-Smith, Camila; Rao, Kanury; Rauscher, Felix; Rave, Stefan; Rave, Tobias Christian; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reinsch, Andreas; Reisin, Hernan; Reiss, Andreas; Relich, Matthew; Rembser, Christoph; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Ridel, Melissa; Rieck, Patrick; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodrigues, Luis; Roe, Shaun; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romeo, Gaston; Romero Adam, Elena; Romero Maltrana, Diego; Rompotis, Nikolaos; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Anthony; Rose, Matthew; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Christian; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Rutherfoord, John; Ruthmann, Nils; Ruzicka, Pavel; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sacerdoti, 
Sabrina; Saddique, Asif; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarkisyan-Grinbaum, Edward; Sarrazin, Bjorn; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Yuichi; Sauvan, Emmanuel; Sauvan, Jean-Baptiste; Savage, Graham; Savard, Pierre; Savu, Dan Octavian; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaeffer, Jan; Schaelicke, Andreas; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schettino, Vinicius; Schiavi, Carlo; Schieck, Jochen; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Christopher; Schmitt, Klaus; Schmitt, Sebastian; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schroeder, Christian; Schroer, Nicolai; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciacca, Gianfranco; Scifo, Estelle; Sciolla, Gabriella; Scott, Bill; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellers, Graham; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Serre, Thomas; Seuster, Rolf; Severini, Horst; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shehu, Ciwake Yusufu; Sherwood, Peter; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shochet, Mel; Shooltz, Dean; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Shushkevich, Stanislav; Sicho, Petr; Sicoe, Alexandru Dan; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silbert, Ohad; Silva, José; Silva Oliveira, Marcos Vinicius; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simoniello, Rosa; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sivoklokov, Serguei; Siyad, Mohamed Jimcaale; Sjölin, Jörgen; Sjursen, Therese; 
Skinnari, Louise Anastasia; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snow, Joel; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Soloviev, Igor; Solovyanov, Oleg; Solovyev, Victor; Soni, Nitesh; Sood, Alexander; Sopko, Bruno; Sopko, Vit; Sorin, Veronica; Sosebee, Mark; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soueid, Paul; Soukharev, Andrey; South, David; Spagnolo, Stefania; Spanò, Francesco; Spearman, William Robert; Spighi, Roberto; Spigo, Giancarlo; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; St Denis, Richard Dante; Stabile, Alberto; Stahlman, Jonathan; Staley, Richard; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Stucci, Stefania Antonia; Stugu, Bjarne; Stupak, John; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramania, Halasya Siva; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Swedish, Stephen; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taghavirad, Saeed; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tamsett, Matthew; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Shuji; Tanasijczuk, Andres Jorge; Tani, Kazutoshi; Tannoury, Nancy; Tapprogge, Stefan; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thoma, Sascha; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thong, Wai Meng; Tian, Feng; Tibbetts, Mark James; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomlinson, 
Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Tran, Huong Lan; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; True, Patrick; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turra, Ruggero; Tuts, Michael; Twomey, Matthew Shaun; Tykhonov, Andrii; Tylmad, Maja; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ugland, Maren; Uhlenbrock, Mathias; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Urbaniec, Dustin; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Vieira De Souza, Julio; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Virzi, Joseph; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Walsh, Brian; Wang, Chao; Wang, Chiho; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Xiaoxiao; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Wasicki, Christoph; Watanabe, Ippei; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; 
Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wendland, Dennis; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wenzel, Volker; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Whittington, Denver; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Hugh; Williams, Sarah; Willocq, Stephane; Wilson, Alan; Wilson, John; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wittig, Tobias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wraight, Kenneth; Wright, Michael; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Un-Ki; Yang, Yi; Yanush, Serguei; Yao, Liwen; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yen, Andy L; Yildirim, Eda; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zaytsev, Alexander; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Lei; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Lei; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Zinonos, Zinonas; Ziolkowski, Michael; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zutshi, Vishnu; Zwalinski, Lukasz; CERN. Geneva. The LHC experiments Committee; LHCC

    2013-01-01

    The Phase-I upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is intended to allow the ATLAS experiment to efficiently trigger on and record data at instantaneous luminosities up to three times the original LHC design value, while maintaining trigger thresholds close to those used in the initial run of the LHC.

  16. Online radiation dose measurement system for ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Mandic, I.; Cindro, V.; Dolenc, I.; Gorisek, A.; Kramberger, G. [Jozef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Mikuz, M. [Jozef Stefan Institute, Jamova 39, Ljubljana (Slovenia); Faculty of Mathematics and Physics, University of Ljubljana (Slovenia); Bronner, J.; Hartet, J. [Physikalisches Institut, Universitat Freiburg, Hermann-Herder-Str. 3, Freiburg (Germany); Franz, S. [CERN, Geneva (Switzerland)

    2009-07-01

    In experiments at the Large Hadron Collider, detectors and electronics will be exposed to high fluxes of photons, charged particles and neutrons. Damage caused by the radiation will influence the performance of the detectors. It will therefore be important to continuously monitor the radiation dose in order to follow the level of degradation of detectors and electronics and to correctly predict future radiation damage. A system for online radiation monitoring using semiconductor radiation sensors at a large number of locations has been installed in the ATLAS experiment. The ionizing dose in SiO₂ will be measured with RadFETs, and the displacement damage in silicon, in units of 1-MeV(Si) equivalent neutron fluence, with p-i-n diodes. At the 14 monitoring locations where the highest radiation levels are expected, the fluence of thermal neutrons will be measured from the current-gain degradation of dedicated bipolar transistors. The design of the system and tests of its performance in a mixed radiation field are described in this paper. First results from this test campaign confirm that doses can be measured with sufficient sensitivity (mGy for total ionizing dose measurements, 10⁹ n/cm² for NIEL (non-ionizing energy loss) measurements, 10¹² n/cm² for thermal neutrons) and accuracy (about 20%) for use in the ATLAS detector.

  17. Online radiation dose measurement system for ATLAS experiment

    International Nuclear Information System (INIS)

    Mandic, I.; Cindro, V.; Dolenc, I.; Gorisek, A.; Kramberger, G.; Mikuz, M.; Bronner, J.; Hartet, J.; Franz, S.

    2009-01-01

    In experiments at the Large Hadron Collider, detectors and electronics will be exposed to high fluxes of photons, charged particles and neutrons. Damage caused by the radiation will influence the performance of the detectors. It will therefore be important to continuously monitor the radiation dose in order to follow the level of degradation of detectors and electronics and to correctly predict future radiation damage. A system for online radiation monitoring using semiconductor radiation sensors at a large number of locations has been installed in the ATLAS experiment. The ionizing dose in SiO₂ will be measured with RadFETs, and the displacement damage in silicon, in units of 1-MeV(Si) equivalent neutron fluence, with p-i-n diodes. At the 14 monitoring locations where the highest radiation levels are expected, the fluence of thermal neutrons will be measured from the current-gain degradation of dedicated bipolar transistors. The design of the system and tests of its performance in a mixed radiation field are described in this paper. First results from this test campaign confirm that doses can be measured with sufficient sensitivity (mGy for total ionizing dose measurements, 10⁹ n/cm² for NIEL (non-ionizing energy loss) measurements, 10¹² n/cm² for thermal neutrons) and accuracy (about 20%) for use in the ATLAS detector.
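
    The two records above quote the sensitivity of the system per quantity (roughly mGy for total ionizing dose, 10⁹ n/cm² for 1-MeV(Si) equivalent fluence, 10¹² n/cm² for thermal neutrons). The sketch below is not the ATLAS monitoring code: it is a minimal Python illustration of how converted readings from such sensors might be accumulated per monitoring location and compared against alert limits; the class name, location name and limit values are hypothetical.

      from dataclasses import dataclass

      # Hypothetical alert limits; real values depend on the component monitored
      # and are not taken from the records above.
      ALERT_LIMITS = {
          "tid_gy": 50.0,         # total ionizing dose in SiO2 [Gy]
          "niel_neq_cm2": 5e13,   # 1-MeV(Si) equivalent fluence [n/cm^2]
          "thermal_n_cm2": 1e14,  # thermal neutron fluence [n/cm^2]
      }

      @dataclass
      class MonitoringLocation:
          """Accumulated radiation quantities at one sensor board."""
          name: str
          tid_gy: float = 0.0
          niel_neq_cm2: float = 0.0
          thermal_n_cm2: float = 0.0

          def add_reading(self, tid_gy=0.0, niel_neq_cm2=0.0, thermal_n_cm2=0.0):
              # Readings are assumed to be already converted from the raw RadFET,
              # p-i-n diode or transistor measurement into physical units.
              self.tid_gy += tid_gy
              self.niel_neq_cm2 += niel_neq_cm2
              self.thermal_n_cm2 += thermal_n_cm2

          def alerts(self):
              totals = {"tid_gy": self.tid_gy,
                        "niel_neq_cm2": self.niel_neq_cm2,
                        "thermal_n_cm2": self.thermal_n_cm2}
              return [q for q, v in totals.items() if v > ALERT_LIMITS[q]]

      loc = MonitoringLocation("pixel-pp0-01")   # hypothetical location name
      loc.add_reading(tid_gy=0.002, niel_neq_cm2=1e9)
      print(loc.alerts())                        # [] until a limit is exceeded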

  18. Module and electronics developments for the ATLAS ITK pixel system

    CERN Document Server

    Munoz Sanchez, Francisca Javiela; The ATLAS collaboration

    2017-01-01

    ATLAS is preparing for an extensive modification of its detector in the course of the planned HL-LHC accelerator upgrade around 2025, which includes a replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). The five innermost layers of the ITk will comprise a pixel detector built from new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m², depending on the final layout choice, which is expected to be made in 2017. A new on-detector readout chip is being designed in the context of the RD53 collaboration in 65 nm CMOS technology. This paper will present the on-going R&D within the ATLAS ITk project towards the new pixel modules and the off-detector electronics. Planar and 3D sensors are being re-designed with cell sizes of 50×50 or 25×100 μm², compatible with the RD53 chip. A sensor thickness equal to or less th...
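
    Both candidate cell geometries quoted above (50×50 and 25×100 μm²) have the same cell area, so the channel count for a given detector area is independent of that choice. The following back-of-the-envelope Python check uses the 14 m² upper bound mentioned in the record; it is purely illustrative and not part of the ITk design documentation.

      # Both candidate pixel cells cover 2500 square microns.
      cell_area_um2 = 50 * 50          # equal to 25 * 100
      cell_area_m2 = cell_area_um2 * 1e-12

      total_area_m2 = 14.0             # upper bound quoted in the record
      channels = total_area_m2 / cell_area_m2
      print(f"{channels:.2e} pixels")  # about 5.6e9 channels for 14 m^2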

  19. A Readout Driver for the ATLAS LAr Calorimeter at a High Luminosity LHC

    CERN Document Server

    Kielburg-Jeka, A; The ATLAS collaboration

    2010-01-01

    A new readout driver (ROD) is being developed as a central part of the signal processing of the ATLAS liquid-argon calorimeters for operation at the sLHC. In the architecture of the upgraded readout system, the ROD modules will have several challenging tasks: reception of up to 1.4 Tb/s of data per board from the detector front-end on multiple high-speed serial links, low-latency data processing, data buffering, and data transmission to the ATLAS trigger and DAQ systems. In order to evaluate the different components, prototype boards in ATCA format equipped with modern Xilinx and Altera FPGAs have been built. We will report on the measured performance of the SERDES devices, the parallel signal processing using DSP slices, the implementation of trigger interfaces, using e.g. multi-Gb Ethernet, as well as the development of the ATCA infrastructure on the ROD prototype modules.

  20. A Readout Driver for the ATLAS LAr Calorimeter at a High Luminosity LHC

    CERN Document Server

    Kielburg-Jeka, A

    2011-01-01

    A new readout driver (ROD) is being developed as a central part of the signal processing of the ATLAS liquid-argon calorimeters for operation at the High Luminosity LHC (HL-LHC). In the architecture of the upgraded readout system, the ROD modules will have several challenging tasks: reception of up to 1.4 Tb/s of data per board from the detector front-end on multiple high-speed serial links, low-latency data processing, data buffering, and data transmission to the ATLAS trigger and DAQ systems. In order to evaluate the different components, prototype boards in ATCA format equipped with modern Xilinx and Altera FPGAs have been built. We will report on the measured performance of the SERDES devices, the parallel signal processing using DSP slices, the implementation of trigger interfaces, using e.g. multi-Gb Ethernet, as well as the development of the ATCA infrastructure on the ROD prototype modules.
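
    The 1.4 Tb/s per-board input rate quoted in the two records above fixes the minimum number of serial links each ROD must aggregate once a per-link payload rate is chosen. The short Python estimate below is illustrative only; the assumed link speeds are not taken from the records.

      input_rate_tbps = 1.4                      # per ROD board, from the record
      for link_gbps in (4.8, 6.4, 10.0):         # assumed usable payload per link
          links = input_rate_tbps * 1000 / link_gbps
          print(f"{link_gbps:>4.1f} Gb/s per link -> {links:.0f} links per board")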

  1. A modern and versatile data-acquisition package for calorimeter prototypes test-beams H4DAQ

    CERN Document Server

    Marini, Andrea Carlo

    2017-01-01

    The upgrade of the calorimeters for the HL-LHC or for future colliders requires an extensive programme of tests to qualify different detector prototypes with dedicated test beams. A common data-acquisition system (called H4DAQ) was developed for the H4 test beam line at the North Area of the CERN SPS in 2014 and has since been adopted by an increasing number of teams involved in the CMS experiment and AIDA groups. Several different calorimeter prototypes and precision timing detectors have used H4DAQ from 2014 to 2017, and it has proved to be a versatile application, portable to many other beam test environments (the CERN beam lines EA-T9 at the PS, H2 and H4 at the SPS, and the INFN Frascati Beam Test Facility). The H4DAQ is fast, simple, modular and can be configured to support different setups. The different functionalities of the DAQ core software are split into three configurable finite state machines: the data readout, run control, and event builder. The distribution of information and data betw...
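
    The record above describes the DAQ core as three configurable finite state machines (data readout, run control and event builder). Below is a minimal, generic finite-state-machine sketch in Python; the states and transitions are typical DAQ ones chosen for illustration and are not taken from the H4DAQ source code.

      class FiniteStateMachine:
          """Tiny FSM: legal transitions are listed explicitly per state."""

          TRANSITIONS = {
              "idle":       {"configure": "configured"},
              "configured": {"start": "running", "reset": "idle"},
              "running":    {"stop": "configured", "pause": "paused"},
              "paused":     {"resume": "running", "stop": "configured"},
          }

          def __init__(self, name):
              self.name = name
              self.state = "idle"

          def handle(self, command):
              try:
                  self.state = self.TRANSITIONS[self.state][command]
              except KeyError:
                  raise RuntimeError(f"{self.name}: '{command}' not allowed "
                                     f"in state '{self.state}'")
              return self.state

      # One FSM instance per functional block named in the record.
      blocks = [FiniteStateMachine(n) for n in ("run_control",
                                                "data_readout",
                                                "event_builder")]
      for fsm in blocks:
          fsm.handle("configure")
          fsm.handle("start")
      print([fsm.state for fsm in blocks])   # ['running', 'running', 'running']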

  2. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks in which experts have to pay attention both to individual subsections and to system-wide traffic flows while monitoring the network. The ATLAS Network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts for ad-hoc and post-mortem analysis. (authors)
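
    The record above names normalization, aggregation and data caching as the techniques used to keep the monitoring interface responsive. The sketch below shows one way per-link counters could be aggregated per switch behind a small time-based cache; the data-provider function and the switch and link names are invented for illustration and do not correspond to the actual ATLAS monitoring components.

      import time
      from collections import defaultdict

      class CachedAggregator:
          """Aggregate per-link byte counters per switch, caching the result."""

          def __init__(self, fetch_counters, ttl_seconds=10.0):
              # fetch_counters() must return (switch, link, byte_count) tuples.
              self._fetch = fetch_counters
              self._ttl = ttl_seconds
              self._cache = None
              self._stamp = 0.0

          def per_switch_totals(self):
              now = time.monotonic()
              if self._cache is None or now - self._stamp > self._ttl:
                  totals = defaultdict(int)
                  for switch, _link, nbytes in self._fetch():
                      totals[switch] += nbytes
                  self._cache = dict(totals)
                  self._stamp = now
              return self._cache

      # Dummy data provider standing in for the SNMP/NAGIOS/database back-ends.
      def fake_counters():
          return [("sw-core-1", "eth1", 1_200_000),
                  ("sw-core-1", "eth2", 800_000),
                  ("sw-rack-7", "eth1", 300_000)]

      agg = CachedAggregator(fake_counters)
      print(agg.per_switch_totals())   # {'sw-core-1': 2000000, 'sw-rack-7': 300000}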

  3. The ATLAS Data Flow System for LHC Run II

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...

  4. The ATLAS Data Flow System for Run 2

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration

    2015-01-01

    After its first shutdown, the LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two-level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, ...
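
    A key feature of the merged HLT process described in the two records above is incremental data collection: event fragments are requested from the readout system only when a selection step actually needs them. The Python sketch below illustrates that pattern; the fragment-request interface and the region names are hypothetical and do not represent the ATLAS Data Flow API.

      class IncrementalEventCollector:
          """Fetch readout fragments lazily and keep them for later selection steps."""

          def __init__(self, request_fragment):
              # request_fragment(detector, region) is assumed to contact the
              # readout system and return the raw data for that region.
              self._request = request_fragment
              self._fragments = {}

          def get(self, detector, region):
              key = (detector, region)
              if key not in self._fragments:      # fetch once, reuse afterwards
                  self._fragments[key] = self._request(detector, region)
              return self._fragments[key]

      def fake_readout(detector, region):
          print(f"fetching {detector}/{region} from the readout system")
          return b"\x00" * 64                     # dummy payload

      event = IncrementalEventCollector(fake_readout)
      event.get("calorimeter", "RoI-3")           # triggers a readout request
      event.get("calorimeter", "RoI-3")           # served from the local store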

  5. ATLAS tile calorimeter cesium calibration control and analysis software

    International Nuclear Information System (INIS)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N

    2008-01-01

    An online control system to calibrate and monitor the ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components like DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.
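
    The scripting layer described above lends itself to a simple illustration. The sketch below is hypothetical (class names, methods and the publisher interface are invented, not the real TileCal online software): it shows how a Python scan driver might sequence the hardware, publish status, and always leave the source in a safe state on errors.

      import logging
      import time

      class ScanError(RuntimeError):
          pass

      class CesiumScan:
          """Hypothetical driver for one cesium source scan (illustrative only)."""

          def __init__(self, hydraulics, publisher):
              self.hydraulics = hydraulics   # drives the liquid flow moving the source
              self.publisher = publisher     # e.g. a wrapper publishing status to IS

          def run(self, module, timeout=600.0):
              logging.info("starting cesium scan of module %s", module)
              self.hydraulics.start_flow(module)
              start = time.time()
              try:
                  while not self.hydraulics.source_parked(module):
                      if time.time() - start > timeout:
                          raise ScanError("source did not return within timeout")
                      self.publisher.publish(module, self.hydraulics.position(module))
                      time.sleep(1.0)
              finally:
                  self.hydraulics.stop_flow(module)   # leave the hardware in a safe state
              logging.info("scan of module %s finished", module)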

  6. ATLAS tile calorimeter cesium calibration control and analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Solovyanov, O; Solodkov, A; Starchenko, E; Karyukhin, A; Isaev, A; Shalanda, N [Institute for High Energy Physics, Protvino 142281 (Russian Federation)], E-mail: Oleg.Solovyanov@ihep.ru

    2008-07-01

    An online control system to calibrate and monitor the ATLAS Barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components like DDC (DCS to DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.

  7. Improving Security in the ATLAS PanDA System

    International Nuclear Information System (INIS)

    Caballero, J; Maeno, T; Potekhin, M; Wenaus, T; Nilsson, P; Stewart, G

    2011-01-01

    The security challenges faced by users of the grid are considerably different from those faced in previous environments. The adoption of pilot job systems by LHC experiments has mitigated many of the problems associated with the inhomogeneities found on the grid and has greatly improved job reliability; however, pilot job systems themselves must then address many security issues, including the execution of multiple users' code under a common 'grid' identity. In this paper we describe the improvements and evolution of the security model in the ATLAS PanDA (Production and Distributed Analysis) system. We describe the security in the PanDA server which is in place to ensure that only authorized members of the VO are allowed to submit work into the system and that jobs are properly audited and monitored. We discuss the security in place between the pilot code itself and the PanDA server, ensuring that only properly authenticated workload is delivered to the pilot for execution. When the code to be executed is from a 'normal' ATLAS user, as opposed to the production system or other privileged actor, then the pilot may use an EGEE-developed identity-switching tool called gLExec. This changes the grid proxy available to the job and also switches the UNIX user identity to protect the privileges of the pilot code proxy. We describe the problems in using this system and how they are overcome. Finally, we discuss security drills which have been run using PanDA and show how these improved our operational security procedures.

  8. Simulation of the ATLAS New Small Wheel (NSW) System

    CERN Document Server

    Maekawa, Koki; The ATLAS collaboration

    2017-01-01

    The instantaneous luminosity of the Large Hadron Collider (LHC) at CERN will be increased up to a factor of five with respect to the present design value by undergoing an extensive upgrade program over the coming decade. In order to benefit from the expected high luminosity performance that will be provided by the Phase-1 upgraded LHC, the first station of the ATLAS muon end-cap Small Wheel system will need to be replaced by a New Small Wheel (NSW) detector. The NSW is going to be installed in the ATLAS detector in the forward region of 1.3 < |η| < 2.7 during the second long LHC shutdown. The NSW will have to operate in a high background radiation region, while reconstructing muon tracks with high precision as well as furnishing information for the Level-1 trigger. A detailed study of the final design and validation of the readout electronics for a set of precision tracking (Micromegas) and trigger chambers (small-strip Thin Gap Chambers or sTGC) that are able to work at high rates with excellent real-...

  9. A Fast Hardware Tracker for the ATLAS Trigger System

    CERN Document Server

    Neubauer, M; The ATLAS collaboration

    2011-01-01

    In hadron collider experiments, triggering the detector to store interesting events for offline analysis is a challenge due to the high rates and multiplicities of particles produced. The LHC will soon operate at a center-of-mass energy of 14 TeV and at high instantaneous luminosities of the order of $10^{34}$ to $10^{35}$ cm$^{-2}$ s$^{-1}$. A multi-level trigger strategy is used in ATLAS, with the first level (LVL1) implemented in hardware and the second and third levels (LVL2 and EF) implemented in a large computer farm. Maintaining high trigger efficiency for the physics we are most interested in while at the same time suppressing high rate physics from inclusive QCD processes is a difficult but important problem. It is essential that the trigger system be flexible and robust, with sufficient redundancy and operating margin. Providing high quality track reconstruction over the full ATLAS detector by the start of processing at LVL2 is an important element to achieve these needs. As the instantaneous lumino...

  10. Simulation of the ATLAS New Small Wheel (NSW) System

    CERN Document Server

    Maekawa, Koki; The ATLAS collaboration

    2017-01-01

    The instantaneous luminosity of the Large Hadron Collider (LHC) at CERN will be increased up to a factor of five with respect to the present design value by undergoing an extensive upgrade program over the coming decade. In order to benefit from the expected high luminosity performance, the first station of the ATLAS muon end-cap Small Wheel system will need to be replaced by a New Small Wheel (NSW) detector during the second long LHC shutdown. The NSW will have to operate in a high background radiation region, while reconstructing muon tracks with high precision as well as furnishing information for the Level-1 trigger. The NSW simulation has been developed to model the actual response of the detector and its fast electronics. The simulation has been used to get a deep understanding of the trigger logic timing, the tracking-segment finding efficiency, track rate and track-pointing resolutions at the high background hit rate expected during the next phases of ATLAS at LHC. The results of these performance stu...

  11. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Karavakis, Edward; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Litmaath, Maarten; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm allows high job reliability and, although it has clear advantages such as making the working environment homogeneous, it presents security and traceability challenges. To address these challenges, gLExec can be used to let the payloads for each user be executed under a different UNIX user id that uniquely identifies the ATLAS user. This paper describes the recent improvements and evolution of the security model within the ATLAS PanDA system, including improvements in the PanDA pilot, in the PanDA server and their integration with MyProxy, a credential caching system that entitles a person or a service to act in the name of the issuer of the credential. Finally, it presents results from ATLAS user jobs running with gLExec and describes the deployment campaign within ATLAS.
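
    As a rough illustration of the identity switch, the snippet below shows how a pilot might hand a user payload to gLExec; the environment variable name and the invocation details are indicative only and would have to be adapted to a real gLExec deployment.

      import os
      import subprocess

      def run_payload_with_glexec(user_proxy_path, payload_cmd):
          """Launch a user payload under gLExec so it runs with the user's identity
          instead of the pilot's (sketch; details are deployment-specific)."""
          env = dict(os.environ)
          env["GLEXEC_CLIENT_CERT"] = user_proxy_path   # assumed variable name
          # gLExec re-maps the UNIX identity, then executes the payload command.
          return subprocess.call(["glexec"] + list(payload_cmd), env=env)

      # Hypothetical usage:
      # rc = run_payload_with_glexec("/tmp/x509up_user", ["python", "run_user_job.py"])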

  12. A dynamic system for ATLAS software installation on OSG grid sites

    International Nuclear Information System (INIS)

    Zhao, X; Maeno, T; Wenaus, T; Leuhring, F; Youssef, S; Brunelle, J; De Salvo, A; Thompson, A S

    2010-01-01

    A dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and reduce its failure rate. In this paper, we discuss the issues encountered in the previous software installation system, and introduce the new approach, which is built upon new developments in the areas of the ATLAS workload management system (PanDA) and the software package management system (pacman). It is also designed to integrate with the EGEE ATLAS software installation framework. In the new system, ATLAS software releases are packaged as pacballs, uniquely identifiable and reproducible self-installing data files. The distribution of pacballs to remote sites is managed by the ATLAS data management system (DQ2) and the PanDA server. The installation on remote sites is automatically triggered by the PanDA pilot jobs. The installation job payload connects to a central ATLAS software installation portal, making installation status information easily accessible across the OSG and EGEE Grids. The issues encountered in running the new system in production, and our future plans for improvement, will also be discussed.
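
    The installation payload described above can be pictured with a short sketch. Everything here (URLs, the way the pacball is executed, the report format) is invented for illustration; only the sequence of download, verify, self-install and report follows the description in the abstract.

      import hashlib
      import subprocess
      import urllib.request

      def install_release(pacball_url, expected_md5, portal_url, release):
          """Fetch a self-installing pacball, verify it, run it and report the result."""
          local_file = pacball_url.rsplit("/", 1)[-1]
          urllib.request.urlretrieve(pacball_url, local_file)

          # A pacball is uniquely identifiable, so the download can be verified.
          md5 = hashlib.md5(open(local_file, "rb").read()).hexdigest()
          if md5 != expected_md5:
              raise RuntimeError("checksum mismatch for " + local_file)

          # Self-installing file: executing it unpacks the release (illustrative call).
          rc = subprocess.call(["sh", local_file, "--install-dir", "releases/" + release])

          status = "installed" if rc == 0 else "failed"
          urllib.request.urlopen(portal_url + "?release=" + release + "&status=" + status)
          return rc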

  13. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments; Reseau a multiplexage statistique pour les systemes de selection et de reconstruction d'evenements dans les experiences de physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Calvet, D

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small scale demonstrator systems consisting of up to 48 computers (approximately 1:20 of the final level-2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  14. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Calvet, D.

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small scale demonstrator systems consisting of up to 48 computers (∼1:20 of the final level-2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  15. The Associative Memory system for the FTK processor at ATLAS

    CERN Document Server

    Cipriani, R; The ATLAS collaboration; Donati, S; Giannetti, P; Lanza, A; Luciano, P; Magalotti, D; Piendibene, M

    2013-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity, the accelerator backgrounds and the luminosity increase, we need increasingly complex and exclusive selections. We present results and performances of a new prototype of the Associative Memory system, the core of the Fast Tracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment trigger upgrade. The AM system provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the “combinatorial challenge”, is beaten by the Associative Memory (AM) technology exploiting parallelism to the maximum level: it compares the event to pre-calculated “expectations” or “patterns” (pattern matching) at once, looking for candidate tracks called “roads”. The problem is solved by the time data are loaded into the AM devices. We report on the tests of the integrate...
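
    The pattern-matching idea is easy to emulate in software. In the toy sketch below (patterns, granularity and thresholds are invented, not FTK configuration data) each stored pattern lists one coarse hit position per detector layer, and a road fires when enough of its layers are matched by hits from the event.

      PATTERNS = {
          "road_A": (3, 7, 12, 18),    # expected coarse hit position per layer
          "road_B": (4, 8, 13, 19),
          "road_C": (9, 1, 5, 22),
      }

      def matched_roads(event_hits, min_layers=3):
          """event_hits: one set of coarse hit positions per detector layer."""
          roads = []
          for name, pattern in PATTERNS.items():
              hits = sum(1 for layer, pos in enumerate(pattern) if pos in event_hits[layer])
              if hits >= min_layers:
                  roads.append(name)
          return roads

      if __name__ == "__main__":
          event = [{3, 9}, {7}, {5, 12}, {2}]     # road_A matches 3 of its 4 layers
          print(matched_roads(event))             # -> ['road_A']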

  16. The Associative Memory system for the FTK processor at ATLAS

    CERN Document Server

    Cipriani, R; The ATLAS collaboration; Donati, S; Giannetti, P; Lanza, A; Luciano, P; Magalotti, D; Piendibene, M

    2014-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity, the accelerator backgrounds and the luminosity increase, we need increasingly complex and exclusive selections. We present results and performances of a new prototype of the Associative Memory system, the core of the Fast Tracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment trigger upgrade. The AM system provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the “combinatorial challenge”, is beaten by the Associative Memory (AM) technology exploiting parallelism to the maximum level: it compares the event to pre-calculated “expectations” or “patterns” (pattern matching) at once, looking for candidate tracks called “roads”. The problem is solved by the time data are loaded into the AM devices. We report on the tests of the integrate...

  17. The Associative Memory system for the FTK processor at ATLAS

    CERN Document Server

    Cipriani, R; The ATLAS collaboration; Donati, S; Giannetti, P; Lanza, A; Luciano, P; Magalotti, D; Piendibene, M

    2013-01-01

    Experiments at the LHC hadron collider search for extremely rare processes hidden in much larger background levels. As the experiment complexity, the accelerator backgrounds and the instantaneous luminosity increase, increasingly complex and exclusive selections are necessary. We present results and performances of a new prototype of Associative Memory (AM) system, the core of the Fast Tracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment trigger upgrade. The AM system provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is beaten by the AM technology exploiting parallelism to the maximum level. The Associative Memory compares the event to pre-calculated "expectations" or "patterns" (pattern matching) at once and looks for candidate tracks called "roads". The problem is solved by the time data are loaded into the AM devices. We report ...

  18. Quarkonia production in small and large systems measured by ATLAS

    CERN Document Server

    Lopez, Jorge; The ATLAS collaboration

    2018-01-01

    The experimentally observed dissociation and regeneration of bound quarkonium states in heavy-ion collisions provide a powerful tool to probe the dynamics of the hot, dense plasma. These measurements are sensitive to the effects of color screening, color recombination, or other, new suppression mechanisms. In the large-statistics Run 2 lead-lead and proton-lead collision data, these phenomena can be probed with unprecedented precision. Measurements of the ground and excited quarkonia states, as well as their separation into prompt and non-prompt components, provide further opportunities to study the dynamics of heavy parton energy loss in these large systems. In addition, quarkonium production rates, and their excited to ground states ratios, in small, asymmetric systems are an interesting probe of cold nuclear matter effects. In this talk, the latest ATLAS results on quarkonia production will be presented, including new, differential measurements of charmonium suppression and azimuthal modulation in lead-lea...

  19. The ATLAS Data Acquisition system in LHC Run 2

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00042480; The ATLAS collaboration

    2017-01-01

    The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. The Trigger and Data Acquisition system of the ATLAS experiment has been upgraded to deal with the increased performance required by this new operational mode. The dataflow system and associated network infrastructure have been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The new design is radically different from the previous implementation both in terms of architecture and performance, with the previous two-level structure merged into a single processing farm, performing incremental data collection and analysis. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. This farm master has also been integrated with a new software-based Region of Interest builder, replacing the previous VMEbus...

  20. The Fiber Optic System for the Advanced Topographic Laser Altimeter System (ATLAS) Instrument

    Science.gov (United States)

    Ott, Melanie N.; Thomes, Joe; Onuma, Eleanya; Switzer, Robert; Chuska, Richard; Blair, Diana; Frese, Erich; Matyseck, Marc

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS) Instrument has been in integration and testing over the past 18 months in preparation for the Ice, Cloud and Land Elevation Satellite - 2 (ICESat-2) Mission, scheduled to launch in 2017. ICESat-2 is the follow-on to ICESat, which launched in 2003 and operated until 2009. ATLAS will measure the elevation of ice sheets, glaciers and sea ice or the "cryosphere" (as well as terrain) to provide data for assessing the Earth's global climate changes. Where ICESat's instrument, the Geoscience Laser Altimeter System (GLAS), used a single beam with a 70 m spot on the ground and a distance of 170 m between spots, ATLAS will measure a spot size of 10 m with a spacing of 70 cm using six beams to measure terrain height changes as small as 4 mm. The ATLAS pulsed transmission system consists of two lasers operating at 532 nm with transmitter optics for beam steering, a diffractive optical element that splits the signal into 6 separate beams, receivers for start pulse detection and a wavelength tracking system. The optical receiver telescope system consists of optics that focus all six beams into optical fibers that feed a filter system that transmits the signal via fiber assemblies to the detectors. Also included on the instrument is a system that calibrates the alignment of the transmitted pulses to the receiver optics for precise signal capture. The larger electro-optical subsystems for transmission, calibration and signal reception stay aligned and transmitting sufficiently due to the optical fiber system that links them together. The robust design of the fiber optic system, consisting of a variety of multi-fiber arrays and simplex assemblies with multiple fiber core sizes and types, will enable the system to maintain consistent critical alignments for the entire life of the mission. Some of the development approaches used to meet the challenging optical system requirements for ATLAS are discussed here.

  1. The fiber optic system for the Advanced Topographic Laser Altimeter System (ATLAS) instrument.

    Science.gov (United States)

    Ott, Melanie N; Thomes, Joe; Onuma, Eleanya; Switzer, Robert; Chuska, Richard; Blair, Diana; Frese, Erich; Matyseck, Marc

    2016-08-28

    The Advanced Topographic Laser Altimeter System (ATLAS) Instrument has been in integration and testing over the past 18 months in preparation for the Ice, Cloud and Land Elevation Satellite - 2 (ICESat-2) Mission, scheduled to launch in 2017. ICESat-2 is the follow-on to ICESat, which launched in 2003 and operated until 2009. ATLAS will measure the elevation of ice sheets, glaciers and sea ice or the "cryosphere" (as well as terrain) to provide data for assessing the Earth's global climate changes. Where ICESat's instrument, the Geoscience Laser Altimeter System (GLAS), used a single beam with a 70 m spot on the ground and a distance of 170 m between spots, ATLAS will measure a spot size of 10 m with a spacing of 70 cm using six beams to measure terrain height changes as small as 4 mm.[1] The ATLAS pulsed transmission system consists of two lasers operating at 532 nm with transmitter optics for beam steering, a diffractive optical element that splits the signal into 6 separate beams, receivers for start pulse detection and a wavelength tracking system. The optical receiver telescope system consists of optics that focus all six beams into optical fibers that feed a filter system that transmits the signal via fiber assemblies to the detectors. Also included on the instrument is a system that calibrates the alignment of the transmitted pulses to the receiver optics for precise signal capture. The larger electro-optical subsystems for transmission, calibration and signal reception stay aligned and transmitting sufficiently due to the optical fiber system that links them together. The robust design of the fiber optic system, consisting of a variety of multi-fiber arrays and simplex assemblies with multiple fiber core sizes and types, will enable the system to maintain consistent critical alignments for the entire life of the mission. Some of the development approaches used to meet the challenging optical system requirements for ATLAS are discussed here.

  2. ATLAS Operations: Experience and Evolution in the Data Taking Era

    CERN Document Server

    Ueda, I; The ATLAS collaboration; Goossens, L; Stewart, G; Jezequel, S; Nairz, A; Negri, G; Campana, S; Di Girolamo, A

    2011-01-01

    This paper summarises the operational experience and improvements of the ATLAS hierarchical multi-tier computing infrastructure in the past year leading to taking and processing of the first collisions in 2009 and 2010. Special focus will be given to the Tier-0, which is responsible, among other things, for a prompt processing of the raw data coming from the online DAQ system and is thus a critical part of the chain. We will give an overview of the Tier-0 architecture, and improvements based on the operational experience. Emphasis will be put on the new developments, namely the Task Management System opening Tier-0 to expert users and Web 2.0 monitoring and management suite. We then overview the achieved performances with the distributed computing system, discuss observed data access patterns over the grid and describe how we used this information to improve analysis rates.

  3. ATLAS Operations: Experience and Evolution in the Data Taking Era

    International Nuclear Information System (INIS)

    Ueda, I

    2011-01-01

    This paper summarises the operational experience and improvements of the ATLAS hierarchical multi-tier computing infrastructure in the past year leading to taking and processing of the first collisions in 2009 and 2010. Special focus will be given to the Tier-0 which is responsible, among other things, for a prompt processing of the raw data coming from the online DAQ system and is thus a critical part of the chain. We will give an overview of the Tier-0 architecture, and improvements based on the operational experience. Emphasis will be put on the new developments, namely the Task Management System opening Tier-0 to expert users and Web 2.0 monitoring and management suite. We then overview the achieved performances with the distributed computing system, discuss observed data access patterns over the grid and describe how we used this information to improve analysis rates.

  4. Status and Evolution of ATLAS Workload Management System PanDA

    CERN Document Server

    AUTHOR|(CDS)2067365; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC uses a sophisticated workload management system, PanDA, to provide access for thousands of physicists to distributed computing resources of unprecedented scale. This system has proved to be robust and scalable during three years of LHC operations. We describe the design and performance of PanDA in ATLAS. The features which make PanDA successful in ATLAS could be applicable to other exabyte scale scientific projects. We describe plans to evolve PanDA towards a general workload management system for the new BigData initiative announced by the US government. Other planned future improvements to PanDA will also be described

  5. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning the HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of development of a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communication and so on, for various computers, including workstation-based systems and VME board computers.
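
    A tiny example in the spirit of such a benchmark suite (not DAQBENCH itself) is a pipe ping-pong between two processes, which folds inter-process communication and context-switch overhead into a single round-trip latency figure:

      import multiprocessing as mp
      import time

      def echo(conn, n):
          """Bounce every message straight back to the sender."""
          for _ in range(n):
              conn.send_bytes(conn.recv_bytes())

      def round_trip_latency(n=10000):
          parent, child = mp.Pipe()
          worker = mp.Process(target=echo, args=(child, n))
          worker.start()
          t0 = time.perf_counter()
          for _ in range(n):
              parent.send_bytes(b"x")
              parent.recv_bytes()
          elapsed = time.perf_counter() - t0
          worker.join()
          return elapsed / n

      if __name__ == "__main__":
          print("average round trip: %.1f us" % (round_trip_latency() * 1e6))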

  6. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D

    2007-03-15

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will be used to help in understanding the basic design of the ATLAS fluid system and its underlying scaling methodology.
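
    Taken at face value, the quoted ratios also fix the derived scales; the relations below are a direct consequence of the stated 1/2 length and 1/144 area scaling (they are not quoted from the report), with subscripts m and p denoting the model (ATLAS) and the prototype plant (APR1400):

      % diameter scale follows from the area scale, volume scale from length x area
      \[
        \frac{D_m}{D_p} = \sqrt{\frac{A_m}{A_p}} = \sqrt{\frac{1}{144}} = \frac{1}{12},
        \qquad
        \frac{V_m}{V_p} = \frac{L_m}{L_p}\cdot\frac{A_m}{A_p}
                        = \frac{1}{2}\cdot\frac{1}{144} = \frac{1}{288}.
      \]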

  7. Calculation Sheet for the Basic Design of the ATLAS Fluid System

    International Nuclear Information System (INIS)

    Park, Hyun Sik; Moon, S. K.; Yun, B. J.; Kwon, T. S.; Choi, K. Y.; Cho, S.; Park, C. K.; Lee, S. J.; Kim, Y. S.; Song, C. H.; Baek, W. P.; Hong, S. D.

    2007-03-01

    The basic design of an integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been carried out by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400, and is scaled for full pressure and temperature conditions. This report includes calculation sheets for the basic design of the ATLAS fluid systems, which consist of a reactor pressure vessel with a core simulator, the primary loop piping, a pressurizer, reactor coolant pumps, steam generators, the secondary system, the safety system, the auxiliary system, and the heat loss compensation system. The present calculation sheets will be used to help in understanding the basic design of the ATLAS fluid system and its underlying scaling methodology.

  8. Error Management in ATLAS TDAQ: An Intelligent Systems approach

    CERN Document Server

    Slopper, John Erik

    2010-01-01

    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures for evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup and datasets were gathered fro...

  9. Network Resiliency Implementation in the ATLAS TDAQ System

    CERN Document Server

    Stancu, S N; The ATLAS collaboration; Batraneanu, S M; Ballestrero, S; Caramarcu, C; Martin, B; Savu, D O; Sjoen, R V; Valsan, L

    2010-01-01

    The ATLAS TDAQ (Trigger and Data Acquisition) system performs the real-time selection of events produced by the detector. For this purpose approximately 2000 computers are deployed and interconnected through various high speed networks, whose architecture has already been described. This article focuses on the implementation and validation of network connectivity resiliency (previously presented at a conceptual level). Redundancy and, where applicable, load balancing are achieved through the synergy of various protocols: 802.3ad link aggregation, OSPF (Open Shortest Path First), VRRP (Virtual Router Redundancy Protocol), MST (Multiple Spanning Trees). An innovative method for cost-effective redundant connectivity of high-throughput high-availability servers is presented. Furthermore, real-life examples showing how redundancy works, and more importantly how it might fail despite careful planning, are presented.
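
    A small, hypothetical check of one ingredient of such a setup is given below: assuming a Linux host whose interfaces are bonded in 802.3ad mode, it parses /proc/net/bonding to report member links whose MII status is not up, which is the kind of verification needed to confirm that redundancy really is in place.

      from pathlib import Path

      def bond_members_down(bond="bond0"):
          """Return the slave interfaces of `bond` whose MII status is not 'up'."""
          text = Path("/proc/net/bonding/" + bond).read_text()
          down, current = [], None
          for line in text.splitlines():
              if line.startswith("Slave Interface:"):
                  current = line.split(":", 1)[1].strip()
              elif line.startswith("MII Status:") and current is not None:
                  if line.split(":", 1)[1].strip() != "up":
                      down.append(current)
                  current = None
          return down

      if __name__ == "__main__":
          print(bond_members_down("bond0") or "all members up")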

  10. Integrated System for Performance Monitoring of ATLAS TDAQ Network

    CERN Document Server

    Savu, D; The ATLAS collaboration; Martin, B; Sjoen, R; Batraneanu, S; Stancu, S

    2010-01-01

    The ATLAS TDAQ Network consists of three separate networks spanning four levels of the experimental building. Over 200 edge switches and 5 multi-blade chassis routers are used to interconnect 2000 processors, adding up to more than 7000 high speed interfaces. In order to substantially speed up ad-hoc and post-mortem analysis, a scalable, yet flexible, integrated system for monitoring both network statistics and environmental conditions, processor parameters and data taking characteristics was required. For successful up-to-the-minute monitoring, information from many SNMP-compliant devices, independent databases and custom APIs was gathered, stored and displayed in an optimal way. Easy navigation and compact aggregation of multiple data sources were the main requirements; characteristics not found in any of the tested products, either open-source or commercial. This paper describes how performance, scalability and display issues were addressed and what challenges the project faced during development and deplo...
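
    The gathering step can be pictured with a short sketch: poll an SNMP-compliant device twice and turn the counter difference into a rate. The host name and community string are placeholders, the OID is the standard ifInOctets of interface index 1, and a production system would use a dedicated SNMP library rather than the command-line tool.

      import subprocess
      import time

      IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"   # ifInOctets, interface index 1

      def snmp_get_counter(host, oid, community="public"):
          out = subprocess.check_output(
              ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid], text=True)
          return int(out.strip())

      def input_rate_bps(host, oid=IF_IN_OCTETS, interval=5.0):
          """Two readings, `interval` seconds apart, converted to bits per second."""
          first = snmp_get_counter(host, oid)
          time.sleep(interval)
          second = snmp_get_counter(host, oid)
          return (second - first) * 8 / interval    # counter wrap-around ignored

      # Example with a placeholder switch name:
      # print(input_rate_bps("sw-core-1.example.net"))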

  11. Calibration and Monitoring systems of the ATLAS Tile Hadron Calorimeter

    CERN Document Server

    BOUMEDIENE, D; The ATLAS collaboration

    2012-01-01

    The TileCal is the hadronic calorimeter covering the most central region of the ATLAS experiment at the LHC. It is a sampling calorimeter with iron plates as absorber and plastic scintillating tiles as the active material. The scintillation light produced by the passage of charged particles is transmitted by wavelength shifting fibers to about 10000 photomultiplier tubes (PMTs). Integrated on the calorimeter there is a composite device that allows the signals to be monitored and/or equalized at various stages of their formation. This device is based on signal generation from different sources: a radioactive source, a laser, charge injection, and minimum-bias events produced in proton-proton collisions. This contribution gives a brief description of the hardware of the different systems and presents the latest results on their performance, such as the determination of the conversion factors, linearity and stability.

  12. Network Resiliency Implementation in the ATLAS TDAQ System

    CERN Document Server

    Stancu, S N; The ATLAS collaboration

    2010-01-01

    The ATLAS TDAQ system performs the real-time selection of events produced by the detector. For this purpose approximately 2000 computers are deployed and interconnected through various high speed networks, whose architecture has already been described. This article focuses on the implementation and validation of network connectivity resiliency (previously presented at a conceptual level). Redundancy and, where applicable, load balancing are achieved through the synergy of various protocols: 802.3ad link aggregation, OSPF, VRRP, MSTP. An innovative method for cost-efficient redundant connectivity of high-throughput high-availability servers is presented. Furthermore, real-life examples showing how redundancy works, and more importantly how it might fail despite careful planning, are presented.

  13. PanDA: distributed production and distributed analysis system for ATLAS

    International Nuclear Information System (INIS)

    Maeno, T

    2008-01-01

    A new distributed software system was developed in the fall of 2005 for the ATLAS experiment at the LHC. This system, called PanDA, provides an integrated service architecture with late binding of jobs, maximal automation through layered services, tight binding with the ATLAS Distributed Data Management system [1], advanced error discovery and recovery procedures, and other features. In this talk, we will describe the PanDA software system. Special emphasis will be placed on the evolution of PanDA based on one and a half years of real experience in carrying out Computer System Commissioning data production [2] for ATLAS. The architecture of PanDA is well suited for the computing needs of the ATLAS experiment, which is expected to be one of the first HEP experiments to operate at the petabyte scale.
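
    The late-binding idea can be sketched in a few lines: a job is matched to a worker node only when a pilot already running there asks for work. The URL and message format below are invented for illustration and are not the real PanDA protocol.

      import json
      import socket
      import urllib.request

      SERVER = "https://panda.example.org/getjob"     # placeholder endpoint

      def request_job():
          """Ask the server for a job suited to this node; None means nothing to run."""
          query = json.dumps({"node": socket.gethostname(), "free_disk_gb": 20}).encode()
          req = urllib.request.Request(SERVER, data=query,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req) as resp:
              job = json.load(resp)
          return job or None

      def pilot_loop(run_payload):
          job = request_job()
          while job:
              run_payload(job)          # execute the user or production payload
              job = request_job()       # the next job is bound only at this moment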

  14. Network based on statistical multiplexing for event selection and event builder systems in high energy physics experiments; Reseau a multiplexage statistique pour les systemes de selection et de reconstruction d'evenements dans les experiences de physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Calvet, D

    2000-03-01

    Systems for on-line event selection in future high energy physics experiments will use advanced distributed computing techniques and will need high speed networks. After a brief description of projects at the Large Hadron Collider, the architectures initially proposed for the Trigger and Data AcQuisition (T/DAQ) systems of the ATLAS and CMS experiments are presented and analyzed. A new architecture for the ATLAS T/DAQ is introduced. Candidate network technologies for this system are described. This thesis focuses on ATM. A variety of network structures and topologies suited to partial and full event building are investigated. The need for efficient networking is shown. Optimization techniques for high speed messaging and their implementation on ATM components are described. Small scale demonstrator systems consisting of up to 48 computers (approximately 1:20 of the final level-2 trigger) connected via ATM are described. Performance results are presented. Extrapolation of measurements and evaluation of needs lead to a proposal of implementation for the main network of the ATLAS T/DAQ system. (author)

  15. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting front-end detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating at the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  16. ATLAS magnet common cryogenic, vacuum, electrical and control systems

    CERN Document Server

    Miele, P; Delruelle, N; Geich-Gimbel, C; Haug, F; Olesen, G; Pengo, R; Sbrissa, E; Tyrvainen, H; ten Kate, H H J

    2004-01-01

    The superconducting Magnet System for the ATLAS detector at the LHC at CERN comprises a Barrel Toroid, two End Cap Toroids and a Central Solenoid with overall dimensions of 20 m diameter by 26 m length and a stored energy of 1.6 GJ. Common proximity cryogenic and electrical systems for the toroids are implemented. The Cryogenic System provides the cooling power for the 3 toroid magnets, considered as a single cold mass (600 tons), and for the CS. The 21 kA toroid and the 8 kA solenoid electrical circuits each comprise a switch-mode power supply, two circuit breakers, water-cooled bus bars, He-cooled current leads and a diode-resistor ramp-down unit. The Vacuum System consists of a group of primary rotary pumps and sets of high vacuum diffusion pumps connected to each individual cryostat. The Magnet Safety System guarantees the magnet protection and human safety through slow and fast dump treatment. The Magnet Control System ensures control, regulation and monitoring of the operation of the magnets. The update...

  17. Implementation of CMS Central DAQ monitoring services in Node.js

    CERN Document Server

    Vougioukas, Michail

    2015-01-01

    This report summarizes my contribution to the CMS Central DAQ monitoring system, in my capacity as a CERN Summer Students Programme participant, from June to September 2015. Specifically, my work was focused on rewriting, from Apache/PHP to Node.js/JavaScript, and optimizing real-time monitoring web services (mostly Elasticsearch-based but also some Oracle-based) for the CMS Data Acquisition (Run II Filterfarm). Moreover, it included an implementation of web server caching, for better scalability when simultaneous web clients use the services. Measurements confirmed that the software developed during this project indeed has the potential to provide scalable services.

  18. The Phase-2 ATLAS ITk Pixel Upgrade

    CERN Document Server

    Macchiolo, Anna; The ATLAS collaboration

    2018-01-01

    The new ATLAS ITk pixel system will be installed during the LHC Phase-II shutdown, to better take advantage of the increased luminosity of the HL-LHC. The detector will consist of 5 layers of stave-like support structures in the most central region and ring-shaped supports in the endcap regions, covering up to |η| < 4. While the outer 3 layers of the Pixel Detector are designed to operate for the full HL-LHC data taking period, the innermost 2 layers of the detector will be replaced around the middle of its lifetime. The ITk pixel detector will be instrumented with new sensors and readout electronics to provide improved tracking performance and radiation hardness compared to the current detector. Sensors will be read out by new ASICs based on the chip developed by the RD53 Collaboration. The pixel off-detector readout electronics will be implemented in the framework of the general ATLAS trigger and DAQ system with a readout speed of up to 5 Gb/s per data link for the innermost layers. Results of extensive tests...

  19. Online remote monitoring facilities for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Feng, E; Hauser, R; Yakovlev, A; Zaytsev, A

    2010-01-01

    ATLAS is one of the four LHC experiments, which started to be operated in collision mode in 2010. The ATLAS apparatus itself as well as the Trigger and the DAQ system are extremely complex facilities which have been built up by a collaboration of 144 institutes from 33 countries. The effective running of the experiment is supported by a large number of experts distributed all over the world. This paper describes the online remote monitoring system which has been developed in the ATLAS TDAQ community in order to support efficient participation of the experts from remote institutes in the exploitation of the experiment. The facilities provided by the remote monitoring system range from web-based access to the general status and data quality of the ongoing data taking session to a scalable service providing real-time mirroring of the detailed monitoring data from the experimental area to dedicated computers in the CERN public network, where this data is made available to remote users t...

  20. The ATLAS Data Acquisition System in LHC Run 2

    Science.gov (United States)

    Panduro Vazquez, William; ATLAS Collaboration

    2017-10-01

    The LHC has been providing pp collisions with record luminosity and energy since the start of Run 2 in 2015. The Trigger and Data Acquisition system of the ATLAS experiment has been upgraded to deal with the increased performance required by this new operational mode. The dataflow system and associated network infrastructure have been reshaped in order to benefit from technological progress and to maximize the flexibility and efficiency of the data selection process. The new design is radically different from the previous implementation both in terms of architecture and performance, with the previous two-level structure merged into a single processing farm, performing incremental data collection and analysis. In addition, logical farm slicing, with each slice managed by a dedicated supervisor, has been dropped in favour of global management by a single farm master operating at 100 kHz. This farm master has also been integrated with a new software-based Region of Interest builder, replacing the previous VMEbus-based system. Finally, the Readout system has been completely refitted with new higher performance, lower footprint server machines housing a new custom front-end interface card. Here we will cover the overall design of the system, along with performance results from the start-up phase of LHC Run 2.

  1. Integration of the trigger and data acquisition systems in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Abolins, M [Michigan State University, Department of Physics and Astronomy, East Lansing, Michigan (United States); Adragna, P [Department of Physics, Queen Mary and Westfield College, University of London, London (United Kingdom); Aleksandrov, E; Aleksandrov, I [Joint Institute for Nuclear Research, Dubna (Russian Federation); Amorim, A [Laboratorio de Instrumentacao e Fisica Experimental, Lisboa (Portugal); Anderson, K [University of Chicago, Enrico Fermi Institute, Chicago, Illinois (United States); Anduaga, X [National University of La Plata, La Plata (United States); Aracena, I; Bartoldus, R [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Asquith, L [Department of Physics and Astronomy, University College London, London (United Kingdom); Avolio, G; Backlund, S [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Badescu, E [National Institute for Physics and Nuclear Engineering, Institute of Atomic Physics, Bucharest (Romania); Baines, J [Rutherford Appleton Laboratory, Chilton, Didcot (United Kingdom); Beck, H P [Laboratory for High Energy Physics, University of Bern, Bern (Switzerland); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Bell, P [Department of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Bell, W H [Department of Physics and Astronomy, University of Glasgow, Glasgow (United Kingdom); Barria, P; Batreanu, S [and others

    2008-07-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system.

  2. Integration of the trigger and data acquisition systems in ATLAS

    International Nuclear Information System (INIS)

    Abolins, M; Adragna, P; Aleksandrov, E; Aleksandrov, I; Amorim, A; Anderson, K; Anduaga, X; Aracena, I; Bartoldus, R; Asquith, L; Avolio, G; Backlund, S; Badescu, E; Baines, J; Beck, H P; Bee, C; Bell, P; Bell, W H; Barria, P; Batreanu, S

    2008-01-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system

  3. Integration of the Trigger and Data Acquisition Systems in ATLAS

    International Nuclear Information System (INIS)

    Abolins, M.; Adragna, P.; Aleksandrov, E.; Aleksandrov, I.; Amorim, A.; Anderson, K.; Anduaga, X.; Aracena, I.; Asquith, L.; Avolio, G.; Backlund, S.; Badescu, E.; Baines, J.; Barria, P.; Bartoldus, R.; Batreanu, S.; Beck, H.P.; Bee, C.; Bell, P.; Bell, W.H.; Bellomo, M.

    2011-01-01

    During 2006 and the first half of 2007, the installation, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area have progressed. There have been a series of technical runs using the final components of the system already installed in the experimental area. Various tests have been run including ones where level 1 preselected simulated proton-proton events have been processed in a loop mode through the trigger and dataflow chains. The system included the readout buffers containing the events, event building, level 2 and event filter trigger algorithms. The scalability of the system with respect to the number of event building nodes used has been studied and quantities critical for the final system, such as trigger rates and event processing times, have been measured using different trigger algorithms as well as different TDAQ components. This paper presents the TDAQ architecture, the current status of the installation and commissioning and highlights the main test results that validate the system.

  4. The ATLAS ROBIN. A high-performance data-acquisition module

    Energy Technology Data Exchange (ETDEWEB)

    Kugel, Andreas

    2009-08-19

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy. The data volume transported is reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of an FPGA processor previously designed by the author. He has contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions including the various design reports and presentations. The results show that the ROBIN module is able to meet
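
    The difference between the PULL and PUSH strategies is essentially that fragments stay buffered at the input of the DAQ system and are shipped only on request, then deleted explicitly. The toy buffer below illustrates that contract; it is a conceptual sketch in Python, not a reflection of the ROBIN firmware or its interfaces.

      from collections import OrderedDict

      class ReadoutBuffer:
          """Toy event-fragment buffer operated in PULL mode."""

          def __init__(self, capacity=1000):
              self.capacity = capacity
              self.fragments = OrderedDict()     # event id -> fragment data

          def store(self, event_id, fragment):
              if len(self.fragments) >= self.capacity:
                  raise OverflowError("buffer full - back-pressure the input channel")
              self.fragments[event_id] = fragment

          def request(self, event_id):
              # The trigger / event builder pulls only the fragments it needs.
              return self.fragments.get(event_id)

          def clear(self, event_ids):
              # Rejected or fully built events are deleted to free buffer space.
              for eid in event_ids:
                  self.fragments.pop(eid, None)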

  5. The ATLAS ROBIN. A high-performance data-acquisition module

    International Nuclear Information System (INIS)

    Kugel, Andreas

    2009-01-01

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy. The data volume transported is reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of an FPGA processor previously designed by the author. He has contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions including the various design reports and presentations. The results show that the ROBIN module is able to meet

  6. The ATLAS ROBIN. A high-performance data-acquisition module

    Energy Technology Data Exchange (ETDEWEB)

    Kugel, Andreas

    2009-08-19

    This work presents the re-configurable processor ROBIN, which is a key element of the data-acquisition system of the ATLAS experiment, located at the new LHC at CERN. The ATLAS detector provides data over 1600 channels simultaneously towards the DAQ system. The ATLAS dataflow model follows the "PULL" strategy, in contrast to the commonly used "PUSH" strategy. The data volume transported is reduced by a factor of 10; however, the data must be temporarily stored at the entry to the DAQ system. The input layer consists of approx. 160 ROS read-out units, each comprising 1 PC and 4 ROBIN modules. Each ROBIN device acquires detector data via 3 input channels and performs local buffering. Board control is done via a 64-bit PCI interface. Event selection and data transmission run via PCI in the baseline bus-based ROS. Alternatively, a local GE interface can take over part or all of the data traffic in the switch-based ROS, in order to reduce the load on the host PC. The performance of the ROBIN module stems from the close cooperation of a fast embedded processor with a complex FPGA. The efficient task distribution lets the processor handle all complex management functionality, programmed in "C", while all movement of data is performed by the FPGA via multiple, concurrently operating DMA engines. The ROBIN project was carried out by an international team and comprises the design specification, the development of the ROBIN hardware, firmware (VHDL and C code), host code (C++), prototyping, volume production and installation of 700 boards. The project was led by the author of this thesis. The hardware platform is an evolution of an FPGA processor previously designed by the author. He has contributed elementary concepts of the communication mechanisms and the "C"-coded embedded application software. He also organised and supervised the prototype and series productions including the various design

  7. Performance of a proximity cryogenic system for the ATLAS central solenoid magnet

    CERN Document Server

    Doi, Y; Makida, Y; Kondo, Y; Kawai, M; Aoki, K; Haruyama, T; Kondo, T; Mizumaki, S; Wachi, Y; Mine, S; Haug, F; Delruelle, N; Passardi, Giorgio; ten Kate, H H J

    2002-01-01

    The ATLAS central solenoid magnet has been designed and constructed as a collaborative work between KEK and CERN for the ATLAS experiment in the LHC project. The solenoid provides an axial magnetic field of 2 Tesla at the center of the tracking volume of the ATLAS detector. The solenoid is installed in a common cryostat with a liquid-argon calorimeter in order to minimize the mass of the cryostat wall. The coil is cooled indirectly by using two-phase helium flow in a pair of serpentine cooling lines. The cryogen is supplied by the ATLAS cryogenic plant, which also supplies helium to the Toroid magnet systems. The proximity cryogenic system for the solenoid has two major components: a control dewar and a valve unit. In addition, a programmable logic controller, PLC, was prepared for the automatic operation and solenoid test in Japan. This paper describes the design of the proximity cryogenic system and results of the performance test. (7 refs).

  8. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  9. First Results from the Online Radiation Dose Monitoring System in ATLAS experiment

    CERN Document Server

    Mandić, I; The ATLAS collaboration; Deliyergiyev, M; Gorišek, A; Kramberger, G; Mikuž, M; Franz, S; Hartert, J; Dawson, I; Miyagawa, P S; Nicolas, L

    2011-01-01

    The high radiation doses that will accumulate in components of the ATLAS experiment during data taking will cause damage to the detectors and readout electronics. It is therefore important to continuously monitor the doses to estimate the level of degradation caused by radiation. The online radiation monitoring system measures the ionizing dose in SiO2 and the fluences of 1-MeV(Si) equivalent neutrons and thermal neutrons at several locations in the ATLAS detector. In this paper, measurements collected during two years of ATLAS data taking are presented and compared to predictions from radiation background simulations.

  10. FELIX: the new detector readout system for the ATLAS experiment

    CERN Document Server

    Zhang, Jinlong; The ATLAS collaboration

    2017-01-01

    From the Phase-I upgrade onward, the Front-End Link eXchange (FELIX) system will be the interface between the data handling system and the detector front-end and trigger electronics at the ATLAS experiment. FELIX will function as a router between custom serial links and a commodity switch network, which will use standard technologies to communicate with data collecting and processing components. The FELIX system is being developed using commercial-off-the-shelf server PC technology in combination with an FPGA-based PCIe Gen3 I/O card interfacing to GigaBit Transceiver links, with Timing, Trigger and Control connectivity provided by an FMC-based mezzanine card. Dedicated firmware for the Xilinx FPGA (Virtex 7 and Kintex UltraScale) installed on the I/O card, alongside an interrupt-driven Linux kernel driver and user-space software, will provide the required functionality. On the network side, the FELIX unit connects to both Ethernet-based networks and InfiniBand. The system architecture of FE...
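
    The routing role described above can be illustrated with a minimal sketch (purely conceptual, not FELIX firmware or software; the LinkRouter class and endpoint names are hypothetical): each front-end link identifier is mapped to a network endpoint, and payloads received on a link are forwarded to the subscribed endpoint.

        // Conceptual sketch of link-to-network routing (not FELIX firmware/software).
        #include <cstdint>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct Endpoint { std::string host; uint16_t port; };

        class LinkRouter {
        public:
            void assign(uint32_t linkId, Endpoint ep) { routes_[linkId] = std::move(ep); }

            // Forward a payload received on a given front-end link to its subscriber.
            void forward(uint32_t linkId, const std::vector<uint8_t>& payload) const {
                auto it = routes_.find(linkId);
                if (it == routes_.end()) { std::cerr << "unrouted link " << linkId << '\n'; return; }
                // A real system would send over Ethernet/InfiniBand; here we just report.
                std::cout << payload.size() << " bytes from link " << linkId
                          << " -> " << it->second.host << ':' << it->second.port << '\n';
            }

        private:
            std::map<uint32_t, Endpoint> routes_;
        };

        int main() {
            LinkRouter router;
            router.assign(7, {"swrod-01.example", 9000});   // hypothetical subscriber
            router.forward(7, {1, 2, 3, 4});
        }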

  11. Control system for ATLAS TileCal HVRemote boards

    CERN Document Server

    AUTHOR|(SzGeCERN)739751; The ATLAS collaboration; Gurriana, Luis; Oleiro Seabra, Luis Filipe; Evans, Guiomar; Gomes, Agostinho; Maio, Amelia; Pinto Silva Rato, Catia Sofia; Almendra Sabino, Joao Maria; Soares Augusto, Jose

    2018-01-01

    One of the proposed solutions for upgrading the high voltage (HV) system of Tilecal, the ATLAS hadron calorimeter, consists in removing the HV regulation boards from the detector and deploying them in a low-radiation room where there is permanent access for maintenance. This option requires many ~100 m long HV cables but removes the requirement of radiation hard boards. That solution simplifies the control system of the HV regulation cards (called HVRemote). It consists of a Detector Control System (DCS) node linked to 256 HVRemote boards through a tree of Ethernet connections. Each HVRemote includes a smart Ethernet transceiver for converting data and commands from the DCS into serial peripheral interface (SPI) signals routed to SPI-capable devices in the HVRemote. The DCS connection to the transceiver and the control of some SPI-capable devices via Ethernet has been tested successfully. A test board (HVRemote-ctrl) with the interfacing sub-system of the HVRemote was fabricated. It is being tested through SP...
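
    A purely hypothetical illustration of the command path described above is sketched below: a DCS command is packed into a small frame that a smart Ethernet transceiver could decode into an SPI transaction. The frame layout, field names and values are assumptions made for the example and are not taken from the HVRemote design.

        // Hypothetical framing of a DCS command into an SPI transaction (illustration only).
        #include <array>
        #include <cstdint>
        #include <cstdio>

        struct SpiCommand {
            uint8_t  device;    // which SPI-capable chip on the board (made-up field)
            uint8_t  reg;       // target register (made-up field)
            uint16_t value;     // e.g. a setting for one HV channel (made-up field)
        };

        // Serialise to a fixed 4-byte frame, big-endian, as a transceiver might expect.
        std::array<uint8_t, 4> encode(const SpiCommand& c) {
            return { c.device, c.reg,
                     static_cast<uint8_t>(c.value >> 8),
                     static_cast<uint8_t>(c.value & 0xFF) };
        }

        int main() {
            SpiCommand cmd{0x02, 0x10, 0x03FF};       // all values are invented
            for (uint8_t b : encode(cmd)) std::printf("%02X ", b);
            std::printf("\n");                        // -> 02 10 03 FF
        }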

  12. Control System for ATLAS TileCal HVRemote boards

    CERN Document Server

    AUTHOR|(SzGeCERN)739751; The ATLAS collaboration; Gurriana, Luis; Oleiro Seabra, Luis Filipe; Evans, Guiomar; Gomes, Agostinho; Maio, Amelia; Pinto Silva Rato, Catia Sofia; Almendra Sabino, Joao Maria; Augusto, Jose

    2017-01-01

    One of the proposed solutions for upgrading the high voltage (HV) system of TileCal, the ATLAS central hadron calorimeter, consists in removing the HV regulation boards from the detector and deploying them in a low-radiation room where there is permanent access for maintenance. This option requires many ∼100 m long HV cables but removes the requirement of radiation hard boards. This solution simplifies the control system of the HV regulation cards (called HVRemote). It consists of a Detector Control System (DCS) node linked to 256 HVRemote boards through a tree of Ethernet connections. Each HVRemote includes a smart Ethernet transceiver for converting data and commands from the DCS into serial peripheral interface (SPI) signals routed to SPI-capable devices in the HVRemote. The DCS connection to the transceiver and the control of some SPI-capable devices via Ethernet has been tested successfully. A test board (HVRemote-Ctrl) with the interfacing sub-system of the HVRemote was fabricated. It is being tested ...

  13. The ATLAS Data Acquisition System in LHC Run 2

    CERN Document Server

    Pozo Astigarraga, Mikel Eukeni; The ATLAS collaboration

    2017-01-01

    The LHC has been providing proton-proton collisions with record intensity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Data Acquisition system is responsible for the transport and storage of the more complex event data at the higher rates that the new collision environment implies. Data from events selected by the first-level hardware trigger are subject to further filtering by software running on a commodity server farm. During this time the data are transferred from the detector electronics across 1900 optical links to custom buffer hardware hosted in 100 commodity server PCs, and then moved across the system for processing by a high-bandwidth network at an average throughput of 30 GB/s. Accepted events are transported to a data-logging system for final packaging and transfer to permanent storage, with an average output rate of 1.5 GB/s. The whole system is actively monitored to maximise efficiency and minimise downtime. Due to the scale of the system and the challenging collision environment th...
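
    The quoted figures can be cross-checked with simple arithmetic: 30 GB/s spread over roughly 100 readout PCs corresponds to about 300 MB/s of input per PC, and, assuming an average accepted-event size of about 1.5 MB (an assumption, not stated in the abstract), the 1.5 GB/s output corresponds to roughly 1 kHz of events written to storage. The sketch below just spells this out.

        // Back-of-the-envelope check of the quoted Run-2 dataflow figures.
        #include <iostream>

        int main() {
            const double totalInputGBs = 30.0;   // readout throughput quoted above
            const int    readoutPCs    = 100;    // commodity servers hosting the buffers
            const double outputGBs     = 1.5;    // rate to permanent storage
            const double eventSizeMB   = 1.5;    // assumed average accepted-event size

            std::cout << "input per readout PC  : "
                      << totalInputGBs * 1000.0 / readoutPCs << " MB/s\n";   // ~300 MB/s
            std::cout << "implied recording rate: "
                      << outputGBs * 1000.0 / eventSizeMB << " Hz\n";        // ~1 kHz
        }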

  14. Role Based Access Control system in the ATLAS experiment

    International Nuclear Information System (INIS)

    Valsan, M L; Dumitru, I; Darlea, G L; Bujor, F; Dobson, M; Miotto, G Lehmann; Schlenker, S; Avolio, G; Scannicchio, D A; Filimonov, V; Khomoutnikov, V; Zaytsev, A S; Korol, A A; Bogdantchikov, A; Caramarcu, C; Ballestrero, S; Twomey, M

    2011-01-01

    The complexity of the ATLAS experiment motivated the deployment of an integrated Access Control System in order to guarantee safe and optimal access for a large number of users to the various software and hardware resources. Such an integrated system was foreseen since the design of the infrastructure and is now central to the operations model. In order to cope with the ever growing needs of restricting access to all resources used within the experiment, the Roles Based Access Control (RBAC) previously developed has been extended and improved. The paper starts with a short presentation of the RBAC design, implementation and the changes made to the system to allow the management and usage of roles to control access to the vast and diverse set of resources. The RBAC implementation uses a directory service based on Lightweight Directory Access Protocol to store the users (∼3000), roles (∼320), groups (∼80) and access policies. The information is kept in sync with various other databases and directory services: human resources, central CERN IT, CERN Active Directory and the Access Control Database used by DCS. The paper concludes with a detailed description of the integration across all areas of the system.
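
    The role-based model itself can be illustrated with a minimal in-memory check (conceptual only, not the LDAP-backed ATLAS implementation; the users, roles and actions below are invented for the example): a user is granted an action if at least one of the user's roles appears in the set of roles permitted for that action.

        // Minimal role-based access check (conceptual, not the ATLAS RBAC service).
        #include <iostream>
        #include <set>
        #include <string>
        #include <unordered_map>

        using Roles = std::set<std::string>;

        std::unordered_map<std::string, Roles> userRoles = {
            {"shifter", {"DAQ:operator"}},
            {"expert",  {"DAQ:operator", "DAQ:expert"}},
        };

        std::unordered_map<std::string, Roles> actionRoles = {
            {"start_run",      {"DAQ:operator", "DAQ:expert"}},
            {"modify_trigger", {"DAQ:expert"}},
        };

        bool allowed(const std::string& user, const std::string& action) {
            const auto& roles  = userRoles[user];
            const auto& needed = actionRoles[action];
            for (const auto& r : roles)
                if (needed.count(r)) return true;
            return false;
        }

        int main() {
            std::cout << std::boolalpha
                      << allowed("shifter", "start_run") << ' '         // true
                      << allowed("shifter", "modify_trigger") << '\n';  // false
        }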

  15. The Associative Memory System Infrastructure of the ATLAS Fast Tracker

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525014; The ATLAS collaboration

    2016-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed specifically to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that serve as seeds for full-resolution track fitting. The AM system implementation is based on a collection of boards named “Serial Link Processor” (AMBSLP), so called because they rely on a network of 900 2 Gb/s serial links to sustain the huge data traffic. The AMBSLP has a high power consumption (~250 W), so the AM system needs custom power and cooling. This presentation reports on the integration of the AMBSLP inside FTK, the infrastructure needed to run and cool a system that foresees many AMBSLPs in the same crate, and the performance of the produced prototypes tested in the global FTK integration, an important milestone to be satisfie...
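
    The pattern-matching principle can be sketched in software as follows (the AM chips perform the same comparison massively in parallel in custom silicon; the eight-layer geometry and all values below are assumptions made for the illustration): each stored coarse pattern is compared against the hit super-strips in every layer, and patterns with enough matching layers are kept as track candidates ("roads").

        // Software sketch of associative-memory style pattern matching.
        #include <array>
        #include <iostream>
        #include <set>
        #include <vector>

        constexpr int kLayers = 8;                    // assumed number of silicon layers
        using Pattern = std::array<int, kLayers>;     // one coarse "super-strip" per layer

        std::vector<int> match(const std::vector<Pattern>& bank,
                               const std::array<std::set<int>, kLayers>& hits,
                               int minLayers) {
            std::vector<int> roads;
            for (size_t p = 0; p < bank.size(); ++p) {
                int fired = 0;
                for (int l = 0; l < kLayers; ++l)
                    if (hits[l].count(bank[p][l])) ++fired;
                if (fired >= minLayers) roads.push_back(static_cast<int>(p));  // candidate
            }
            return roads;
        }

        int main() {
            std::vector<Pattern> bank = {{1,2,3,4,5,6,7,8}, {9,9,9,9,9,9,9,9}};
            std::array<std::set<int>, kLayers> hits = {{{1},{2},{3},{4},{5},{6},{7},{8}}};
            for (int road : match(bank, hits, 7)) std::cout << "road " << road << '\n';  // road 0
        }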

  16. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  17. System Description of the Electrical Power Supply System for the ATLAS Integral Test Loop

    International Nuclear Information System (INIS)

    Moon, S. K.; Park, J. K.; Kim, Y. S.; Song, C. H.; Baek, W. P.

    2007-02-01

    An integral effect test loop for pressurized water reactors (PWRs), the ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed by the Thermal-Hydraulics Safety Research Team at the Korea Atomic Energy Research Institute (KAERI). The ATLAS facility has been designed to have a length scale of 1/2 and an area scale of 1/144 compared with the reference plant, APR1400. This report describes the design and technical specifications of the electrical power supply system, which supplies electrical power to the core heater rods, other heaters, various pumps and other systems. The electrical power supply system has acquired final operational approval from the Korea Electrical Safety Corporation. During performance tests of its operation and control, the electrical power supply system showed completely acceptable operation and control performance.

  18. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  19. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  20. A High-Resolution In Vivo Atlas of the Human Brain's Serotonin System

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Ganz-Benjaminsen, Melanie; Feng, Ling

    2017-01-01

    The serotonin (5-hydroxytryptamine, 5-HT) system modulates many important brain functions and is critically involved in many neuropsychiatric disorders. Here, we present a high-resolution, multidimensional, in vivo atlas of four of the human brain's 5-HT receptors (5-HT1A, 5-HT1B, 5-HT2A, and 5-HT4...... with postmortem human brain autoradiography outcomes showed a high correlation for the five 5-HT targets and this enabled us to transform the atlas to represent protein densities (in picomoles per milliliter). We also assessed the regional association between protein concentration and mRNA expression in the human...... brain by comparing the 5-HT density across the atlas with data from the Allen Human Brain atlas and identified receptor- and transporter-specific associations that show the regional relation between the two measures. Together, these data provide unparalleled insight into the serotonin system...

  1. FELIX: the new detector readout system for the ATLAS experiment

    CERN Document Server

    ATLAS TDAQ Collaboration; The ATLAS collaboration

    2017-01-01

    Starting during the upcoming major LHC shutdown from 2019-2021, the ATLAS experiment at CERN will move to the Front-End Link eXchange (FELIX) system as the interface between the data acquisition system and the trigger and detector front-end electronics. FELIX will function as a router between custom serial links and a commodity switch network, which will use industry standard technologies to communicate with data collection and processing components. The FELIX system is being developed using commercial-off-the-shelf server PC technology in combination with an FPGA-based PCIe Gen3 I/O card hosting GigaBit Transceiver links and with Timing, Trigger and Control connectivity provided by an FMC-based mezzanine card. FELIX functions will be implemented with dedicated firmware for the Xilinx FPGA (Virtex 7 and Kintex UltraScale) installed on the I/O card alongside an interrupt-driven Linux kernel driver and user-space software. On the network side, FELIX is able to connect to both Ethernet or Infiniband network a...

  2. FELIX: the new detector readout system for the ATLAS experiment

    CERN Document Server

    Bauer, Kevin Thomas; The ATLAS collaboration

    2018-01-01

    Starting during the upcoming major LHC shutdown from 2019-2021, the ATLAS experiment at CERN will move to the Front-End Link eXchange (FELIX) system as the interface between the data acquisition system and the trigger and detector front-end electronics. FELIX will function as a router between custom serial links and a commodity switch network, which will use industry standard technologies to communicate with data collection and processing components. The FELIX system is being developed using commercial-off-the-shelf server PC technology in combination with an FPGA-based PCIe Gen3 I/O card hosting GigaBit Transceiver links and with Timing, Trigger and Control connectivity provided by an FMC-based mezzanine card. FELIX functions will be implemented with dedicated firmware for the Xilinx FPGA (Virtex 7 and Kintex UltraScale) installed on the I/O card alongside an interrupt-driven Linux kernel driver and user-space software. On the network side, FELIX is able to connect to both Ethernet or Infiniband network a...

  3. The ATLAS Data Flow system for the Second LHC Run

    CERN Document Server

    Hauser, Reiner; The ATLAS collaboration

    2015-01-01

    After its first shutdown, LHC will provide pp collisions with increased luminosity and energy. In the ATLAS experiment the Trigger and Data Acquisition (TDAQ) system has been upgraded to deal with the increased event rates. The Data Flow (DF) element of the TDAQ is a distributed hardware and software system responsible for buffering and transporting event data from the Readout system to the High Level Trigger (HLT) and to the event storage. The DF has been reshaped in order to profit from the technological progress and to maximize the flexibility and efficiency of the data selection process. The updated DF is radically different from the previous implementation both in terms of architecture and expected performance. The pre-existing two level software filtering, known as L2 and the Event Filter, and the Event Building are now merged into a single process, performing incremental data collection and analysis. This design has many advantages, among which are: the radical simplification of the architecture, the f...

  4. Upgrades to the ATLAS trigger system   

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221618; The ATLAS collaboration

    2017-01-01

    In coming years the LHC is expected to undergo upgrades to increase both the energy of proton-proton collisions and the instantaneous luminosity. In order to cope with these more challenging LHC conditions, upgrades of the ATLAS trigger system will be required. This talk will focus on some of the key aspects of these upgrades. Firstly, the upgrade period between 2019 and 2021 will see an increase in instantaneous luminosity to $3\times10^{34}\,\mathrm{cm^{-2}\,s^{-1}}$. Upgrades to the Level-1 trigger system during this time will include improvements for both the muon and calorimeter triggers. These include the upgrade of the first-level endcap muon trigger, the calorimeter trigger electronics and the addition of new calorimeter feature-extractor hardware, such as the Global Feature Extractor (gFEX). An overview will be given of the design and development status of the aforementioned systems, along with the latest testing and validation results. By 2026, the High Luminosity LHC will be able to deliver 14 TeV collisions wit...

  5. The ATLAS Data Acquisition System in LHC Run 2

    CERN Document Server

    Pozo Astigarraga, Mikel Eukeni; The ATLAS collaboration

    2017-01-01

    The LHC has been providing proton-proton collisions with record intensity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Data Acquisition system is responsible for the transport and storage of the more complex event data at the higher rates that the new collision environment implies. Data from events selected by the first-level hardware trigger are subject to further filtering by software running on a commodity load-balanced processing farm of some 2000 servers. During this time the data are transferred from the detector electronics across 1900 optical links to custom buffer hardware hosted in 100 commodity server PCs, and transferred across the system for processing by a high-bandwidth network at an average throughput of 30 GB/s. Accepted events are then transported to a data-logging system for final packaging and transfer to permanent storage, with a final average output bandwidth of 1.5 GB/s. The whole system is actively monitored to maximise efficiency and minimise downtime. Due to the scale o...

  6. Rucio, the next-generation Data Management system in ATLAS

    Science.gov (United States)

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper presents the key concepts of Rucio, details its design and the technology it employs, describes the tests that were conducted to validate it, and finally describes the migration steps taken to move from DQ2 to Rucio.

  7. Rucio, the next-generation Data Management system in ATLAS

    CERN Document Server

    Serfon, C; Beermann, T; Garonne, V; Goossens, L; Lassnig, M; Nairz, A; Vigne, R

    2016-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper presents the key concepts of Rucio, details its design and the technology it employs, describes the tests that were conducted to validate it, and finally describes the migration steps taken to move from DQ2 to Rucio.

  8. Data Quality system of the ATLAS hadronic Tile calorimeter

    International Nuclear Information System (INIS)

    Nemecek, Stanislav

    2012-01-01

    The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment. It is subdivided into a large central barrel and two smaller lateral extended barrels. Each barrel consists of 64 wedges, made of iron plates and scintillating tiles. Two edges of each scintillating tile are air-coupled to wavelength-shifting (WLS) fibres, which collect the scintillation light and transmit it to photomultipliers. The total number of channels is about 10000. An essential part of the TileCal detector is the Data Quality (DQ) system. The DQ system is designed to check the status of the electronic channels and to provide information at two levels, online and offline. The online TileCal DQ system continuously monitors the data while they are recorded and provides fast feedback. The offline DQ system allows a detailed study; if needed, it provides corrections to be applied to the recorded data and allows the data to be validated for physics analysis. In addition to checking physics data, the TileCal DQ systems also operate on calibration data. The TileCal calibration system provides well-defined signals, and the response to these signals allows the behaviour of the electronic channels to be checked in detail. The Monitoring and Calibration Web System supports data-quality analyses at the level of individual channels. The online, offline and calibration versions of the TileCal DQ system also provide automatic tests, the results of which allow fast and robust feedback.
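
    A toy version of such a channel check is sketched below (an illustration only, not the TileCal DQ code; thresholds and values are invented): channels whose calibration response deviates from a reference by more than a relative tolerance are flagged for follow-up.

        // Toy channel check: flag channels whose calibration response deviates
        // from a reference value by more than a relative tolerance.
        #include <cmath>
        #include <iostream>
        #include <vector>

        std::vector<int> badChannels(const std::vector<double>& response,
                                     const std::vector<double>& reference,
                                     double tolerance) {
            std::vector<int> bad;
            for (size_t ch = 0; ch < response.size(); ++ch)
                if (std::fabs(response[ch] - reference[ch]) > tolerance * reference[ch])
                    bad.push_back(static_cast<int>(ch));
            return bad;
        }

        int main() {
            std::vector<double> ref  = {100.0, 100.0, 100.0};
            std::vector<double> meas = { 99.0,  80.0, 101.0};
            for (int ch : badChannels(meas, ref, 0.05))
                std::cout << "channel " << ch << " outside tolerance\n";   // channel 1
        }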

  9. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, George Victor

    2010-10-27

    The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, via which configuration data are loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", realised in the Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to provide the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)
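
    Configuration and monitoring over VMEbus ultimately reduce to reading and writing mapped registers. The sketch below shows the generic memory-mapped register idiom (an illustration only; the experiment uses dedicated VME drivers and libraries, and the register names and bit values are invented).

        // Generic memory-mapped register access as used for configuration and
        // monitoring over a bus such as VME (illustration only).
        #include <cstdint>
        #include <cstdio>

        struct ModuleRegs {
            volatile uint32_t control;     // write: configuration bits (invented)
            volatile uint32_t status;      // read : module state (invented)
            volatile uint32_t eventCount;  // read : monitoring counter (invented)
        };

        int main() {
            // In a real setup this object would be the mapped VME address window
            // of the module; here a local dummy stands in for it.
            static ModuleRegs dummyModule{0x0, 0x1, 1234};
            ModuleRegs* regs = &dummyModule;

            regs->control = 0x3;    // e.g. enable + run (made-up bit assignment)
            std::printf("status=0x%X events=%u\n",
                        static_cast<unsigned>(regs->status),
                        static_cast<unsigned>(regs->eventCount));
        }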

  10. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    International Nuclear Information System (INIS)

    Andrei, George Victor

    2010-01-01

    The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, via which configuration data are loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", realised in the Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to provide the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  11. The data path of the ATLAS level-1 calorimeter trigger preprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, George Victor

    2010-10-27

    The PreProcessor of the ATLAS Level-1 Calorimeter Trigger provides digital values of transverse energy in real time to the subsequent object-finding processors. The input comprises more than 7000 analogue signals of reduced granularity from the calorimeters of the ATLAS detector. The Level-1 trigger decision must be verified. For this, the PreProcessor transmits copies of the real-time digital data to the Data Acquisition (DAQ) system. In addition, the PreProcessor system provides a standard VMEbus interface to the computing infrastructure of the experiment, via which configuration data are loaded and control or monitoring data are read out. A dedicated system that ensures both the transfer of event data to storage in ATLAS and the data transfer over VME was implemented on the 124 modules of the PreProcessor system in the form of a "Readout Manager", realised in the Field Programmable Gate Array (FPGA) located on each module. The first part of this work describes the algorithms developed to provide the functionality of the Readout Manager. The second part deals with the tests that were carried out to ensure the proper functionality of the modules before they were installed at CERN in the ATLAS cavern. (orig.)

  12. The Evolution of the Trigger and Data Acquisition System in the ATLAS Experiment

    CERN Document Server

    Garelli, N; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current first LHC long shutdown. The purpose of this upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates.

  13. Software framework developed for the slice test of the ATLAS endcap muon trigger system

    CERN Document Server

    Komatsu, S; Ishida, Y; Tanaka, K; Hasuko, K; Kano, H; Matsumoto, Y; Yakamura, Y; Sakamoto, H; Ikeno, M; Nakayoshi, K; Sasaki, O; Yasu, Y; Hasegawa, Y; Totsuka, M; Tsuji, S; Maeno, T; Ichimiya, R; Kurashige, H

    2002-01-01

    A slice test of the ATLAS end-cap muon level-1 trigger system was carried out separately in 2001 and 2002. We developed our own software framework for property and run control for the slice test in 2001. The system is written in C++ throughout, and the multi-PC control system is implemented using CORBA. We then restructured the software on top of the ATLAS online software framework and used it for the slice test in 2002. In this report we discuss the two systems in detail, with emphasis on module property configuration and run control. (8 refs).
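
    Run control in such frameworks is typically organised as a finite-state machine driven by commands such as configure, start and stop. The sketch below is a minimal, purely conceptual version (not the slice-test framework or the ATLAS online software; the states and command names are assumptions for the example).

        // Minimal run-control finite-state machine (conceptual sketch only).
        #include <iostream>
        #include <string>

        enum class State { Initial, Configured, Running };

        class RunControl {
        public:
            bool handle(const std::string& cmd) {
                if (cmd == "configure" && state_ == State::Initial)    { state_ = State::Configured; return true; }
                if (cmd == "start"     && state_ == State::Configured) { state_ = State::Running;    return true; }
                if (cmd == "stop"      && state_ == State::Running)    { state_ = State::Configured; return true; }
                return false;   // command not allowed in the current state
            }
            State state() const { return state_; }
        private:
            State state_ = State::Initial;
        };

        int main() {
            RunControl rc;
            for (const std::string cmd : {"configure", "start", "stop"})
                std::cout << cmd << (rc.handle(cmd) ? " ok\n" : " rejected\n");
        }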

  14. The detector control web system of