WorldWideScience

Sample records for cms distributed computing

  1. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...
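
    As a rough illustration of the scale behind the 25 Hz figure quoted above, the short Python sketch below converts a sustained reconstruction rate into events and data volume per day. The event size is an assumed, illustrative number and is not taken from the record.

        # Back-of-the-envelope sketch of the data challenge scale described above.
        # The 25 Hz input rate is quoted in the abstract; the raw event size is
        # an assumption used only to illustrate the implied data volume.

        RECO_RATE_HZ = 25          # sustained reconstruction input rate (from the abstract)
        EVENT_SIZE_MB = 1.5        # assumed event size, illustrative only
        SECONDS_PER_DAY = 24 * 3600

        events_per_day = RECO_RATE_HZ * SECONDS_PER_DAY
        volume_tb_per_day = events_per_day * EVENT_SIZE_MB / 1e6  # MB -> TB

        print(f"events per day : {events_per_day:,}")
        print(f"volume per day : {volume_tb_per_day:.1f} TB at {EVENT_SIZE_MB} MB/event")
        # 25 Hz * 86400 s = 2,160,000 events/day; roughly 3.2 TB/day to ship to
        # the regional centres at the assumed event size.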

  2. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  3. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  4. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.
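
    To make the kind of per-link bookkeeping mentioned above concrete, here is a minimal Python sketch that aggregates transfer records into throughput and success statistics per link. The record layout and the example site names are assumptions for illustration; the actual statistics are collected from the FTS servers themselves, not from a list like this.

        from collections import defaultdict

        # Hypothetical transfer records: (source, destination, bytes, seconds, succeeded).
        transfers = [
            ("T1_ES_PIC", "T2_ES_CIEMAT", 2_000_000_000, 80, True),
            ("T1_ES_PIC", "T2_ES_CIEMAT", 2_000_000_000, 95, True),
            ("T0_CH_CERN", "T1_ES_PIC", 4_000_000_000, 120, False),
        ]

        stats = defaultdict(lambda: {"bytes": 0, "seconds": 0, "ok": 0, "fail": 0})
        for src, dst, nbytes, secs, ok in transfers:
            link = stats[(src, dst)]
            if ok:
                link["bytes"] += nbytes
                link["seconds"] += secs
                link["ok"] += 1
            else:
                link["fail"] += 1

        for (src, dst), s in stats.items():
            rate_mbs = s["bytes"] / s["seconds"] / 1e6 if s["seconds"] else 0.0
            quality = s["ok"] / (s["ok"] + s["fail"])
            print(f"{src} -> {dst}: {rate_mbs:6.1f} MB/s, success {quality:.0%}")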

  5. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were not considered strategic in the previous phases. Examples are: adequate authorization to control and prioritize the access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks on the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process to deploy new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular we describe the introduction of new middleware features during the last 18 months as well as the requirements placed on Grid and Cloud software developers for the future.

  6. Distributed Analysis in CMS

    CERN Document Server

    Fanfani, Alessandra; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giuseppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; Brew, Chris; Calloni, Marco; Cesini, Daniele; Cinquilli, Mattia; Codispoti, Giuseppe; D'Hondt, Jorgen; Dong, Liang; Dongiovanni, Danilo; Donvito, Giacinto; Dykstra, David; Edelmann, Erik; Egeland, Ricky; Elmer, Peter; Eulisse, Giulio; Evans, Dave; Fanzago, Federica; Farina, Fabio; Feichtinger, Derek; Fisk, Ian; Flix, Josep; Grandi, Claudio; Guo, Yuyi; Happonen, Kalle; Hernandez, Jose M; Huang, Chih-Hao; Kang, Kejing; Karavakis, Edward; Kasemann, Matthias; Kavka, Carlos; Khan, Akram; Kim, Bockjoo; Klem, Jukka; Koivumaki, Jesper; Kress, Thomas; Kreuzer, Peter; Kurca, Tibor; Kuznetsov, Valentin; Lacaprara, Stefano; Lassila-Perini, Kati; Letts, James; Linden, Tomas; Lueking, Lee; Maes, Joris; Magini, Nicolo; Maier, Gerhild; McBride, Patricia; Metson, Simon; Miccio, Vincenzo; Padhi, Sanjay; Pi, Haifeng; Riahi, Hassen; Riley, Daniel; Rossman, Paul; Saiz, Pablo; Sartirana, Andrea; Sciaba, Andrea; Sekhri, Vijay; Spiga, Daniele; Tuura, Lassi; Vaandering, Eric; Vanelderen, Lukas; Van Mulders, Petra; Vedaee, Aresh; Villella, Ilaria; Wicklund, Eric; Wildish, Tony; Wissing, Christoph; Wurthwein, Frank

    2009-01-01

    The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis to support a wide community of thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.

  7. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community.

  8. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows for collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results are discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and we discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.
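
    As a minimal illustration of the efficiency and latency figures of merit discussed above, the sketch below computes a CPU efficiency and an overall workflow latency from a couple of hypothetical job records. The field names and values are assumptions, not the actual CMS accounting schema.

        from datetime import datetime

        # Hypothetical Tier-1 job records; field names and values are assumptions.
        jobs = [
            {"workflow": "rereco_2010B", "start": datetime(2010, 11, 1, 8),
             "end": datetime(2010, 11, 1, 20), "cpu_hours": 10.5},
            {"workflow": "rereco_2010B", "start": datetime(2010, 11, 1, 9),
             "end": datetime(2010, 11, 2, 1), "cpu_hours": 14.0},
        ]

        wall_hours = sum((j["end"] - j["start"]).total_seconds() / 3600 for j in jobs)
        cpu_hours = sum(j["cpu_hours"] for j in jobs)
        latency = max(j["end"] for j in jobs) - min(j["start"] for j in jobs)

        print(f"CPU efficiency  : {cpu_hours / wall_hours:.0%}")   # CPU time / wall time
        print(f"workflow latency: {latency}")                      # first start to last end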

  9. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows for collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results are discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and we discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  10. CMS on the GRID: Toward a fully distributed computing architecture

    International Nuclear Information System (INIS)

    Innocente, Vincenzo

    2003-01-01

    The computing systems required to collect, analyse and store the physics data at LHC would need to be distributed and global in scope. CMS is actively involved in several grid-related projects to develop and deploy a fully distributed computing architecture. We present here recent developments of tools for automating job submission and for serving data to remote analysis stations. Plans for further testing and deployment of a production grid are also described.

  11. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows for collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results are discussed and lessons are drawn from this experience. The simul...

  12. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  13. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiencies. The performance of the sites against these tests during the first years of LHC running is also reviewed.
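
    A minimal sketch of the Site Readiness idea described above: combine a few daily test metrics into a ready/not-ready decision per site. The metric names, thresholds and site names are assumptions for illustration and do not reproduce the actual CMS criteria.

        # Combine daily site test results into a readiness flag; all criteria are assumed.
        def site_ready(day_metrics, min_sam=0.8, min_hammercloud=0.8, max_downtime_h=4):
            """day_metrics: SAM availability, HammerCloud success rate and
            scheduled downtime hours for one site on one day (assumed fields)."""
            return (day_metrics["sam_availability"] >= min_sam
                    and day_metrics["hammercloud_success"] >= min_hammercloud
                    and day_metrics["downtime_hours"] <= max_downtime_h)

        sites = {
            "T2_XX_EXAMPLE": {"sam_availability": 0.95, "hammercloud_success": 0.91, "downtime_hours": 0},
            "T2_YY_EXAMPLE": {"sam_availability": 0.70, "hammercloud_success": 0.88, "downtime_hours": 2},
        }

        for name, metrics in sites.items():
            print(name, "READY" if site_ready(metrics) else "NOT READY")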

  14. Distributed Grid Experiences in CMS DC04

    CERN Document Server

    Fanfani, A; Grandi, C; Legrand, I; Suresh, S; Campana, S; Donno, F; Jank, W; Sinanis, N; Sciabà, A; García-Abia, P; Hernández, J; Ernst, M; Anzar, A; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatine, N; Perelmutov, T; Pordes, R; Ratnikova, N; Weigand, J; Wu, Y; Colling, D J; MacEvoy, B; Tallini, H; Wakefield, L; De Filippis, N; Donvito, G; Maggi, G; Bonacorsi, D; Dell'Agnello, L; Martelli, B; Biasotto, M; Fantinel, S; Corvo, M; Fanzago, F; Mazzucato, M; Tuura, L; Martin, T; Letts, J; Bockjoo, K; Prescott, C; Rodríguez, J; Zahn, A; Bradley, D

    2005-01-01

    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Grid (LCG) and in the US with Grid2003 were utilized to complete the various aspects of the challenge. A description of the experiences, successes and lessons learned from both grid infrastructures is presented.

  15. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at LHC requires a wide use of computer resources. In the context of JINR activities in the CMS Project, hardware and software resources have been provided for full participation of JINR specialists in the CMS experiment; the JINR computer infrastructure was brought closer to the CERN one. JINR also provides the informational support for the CMS experiment (web-server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are stated.

  16. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services will be covered. Emphasis is on CMS data management and workload management. (authors)

  17. Towards a global monitoring system for CMS computing operations

    CERN Multimedia

    CERN. Geneva; Bauerdick, Lothar A.T.

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalising and streamlining existing components and ...

  18. Towards a Global Monitoring System for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A.T. [Fermilab]; Sciaba, Andrea [CERN]

    2012-01-01

    The operation of the CMS computing system requires a complex monitoring system to cover all its aspects: central services, databases, the distributed computing infrastructure, production and analysis workflows, the global overview of the CMS computing activities and the related historical information. Several tools are available to provide this information, developed both inside and outside of the collaboration and often used in common with other experiments. Despite the fact that the current monitoring allowed CMS to successfully perform its computing operations, an evolution of the system is clearly required, to adapt to the recent changes in the data and workload management tools and models and to address some shortcomings that make its usage less than optimal. Therefore, a recent and ongoing coordinated effort was started in CMS, aiming at improving the entire monitoring system by identifying its weaknesses and the new requirements from the stakeholders, rationalising and streamlining existing components, and driving future software development. This contribution gives a complete overview of the CMS monitoring system and a description of all the recent activities that have been started with the goal of providing a more integrated, modern and functional global monitoring system for computing operations.

  19. Performance studies and improvements of CMS distributed data transfers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Flix, J; Kaselis, R; Magini, N; Letts, J; Sartirana, A

    2012-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered distributed infrastructures. The CMS experiment relies on the File Transfer Service (FTS) for data distribution, a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centers and used by all the computing sites in CMS, subject to established CMS and site setup policies, including all the virtual organizations making use of the Grid resources at the site, and are properly dimensioned to satisfy all of their requirements. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer routes, and of the sharing and interference with other VOs using the same FTS transfer managers. This contribution deals with a complete revision of all FTS servers used by CMS, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels, as well as with performance studies for all kinds of transfer routes, including measurements of the overheads introduced by SRM servers and storage systems, FTS server misconfigurations and identification of congested channels, historical transfer throughputs per stream, file-latency studies,… This information is retrieved directly from the FTS servers through the FTS Monitor webpages and conveniently archived for further analysis. The project provides an interface for all these values, to ease the analysis of the data.

  20. Power distribution studies for CMS forward tracker

    International Nuclear Information System (INIS)

    Todri, A.; Turqueti, M.; Rivera, R.; Kwan, S.

    2009-01-01

    The Electronic Systems Engineering Department of the Computing Division at the Fermi National Accelerator Laboratory is carrying out R&D investigations for the upgrade of the power distribution system of the Compact Muon Solenoid (CMS) Pixel Tracker at the Large Hadron Collider (LHC). Among the goals of this effort is that of analyzing the feasibility of alternative powering schemes for the forward tracker, including DC to DC voltage conversion techniques using commercially available and custom switching regulator circuits. Tests of these approaches are performed using the PSI46 pixel readout chip currently in use in the CMS Tracker. Performance measures of the detector electronics will include pixel noise and threshold dispersion results. Issues related to susceptibility to switching noise will be studied and presented. In this paper, we describe the current power distribution network of the CMS Tracker, study the implications of the proposed upgrade to a DC-DC converter powering scheme and perform noise susceptibility analysis.
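
    The arithmetic behind the DC-DC option studied above can be illustrated in a few lines: for a fixed delivered power, the ohmic loss in the supply cables scales as the square of the current, so converting down from a higher distribution voltage near the detector reduces the cable loss quadratically. All numbers in the sketch are illustrative assumptions, not measured values.

        # Why DC-DC conversion helps: for power P delivered over cables of resistance R,
        # the cable loss is (P/V)^2 * R, so distributing at k times the voltage and
        # converting down near the detector cuts the loss by k^2. Numbers are assumed.

        P_LOAD_W = 20.0      # power needed by a group of readout chips (assumed)
        R_CABLE_OHM = 0.5    # round-trip cable resistance (assumed)

        for v_supply in (2.5, 10.0):   # direct powering vs. 4x higher voltage plus DC-DC
            current = P_LOAD_W / v_supply
            loss = current**2 * R_CABLE_OHM
            print(f"V = {v_supply:4.1f} V -> I = {current:4.1f} A, cable loss = {loss:5.2f} W")
        # 2.5 V: 8 A and 32 W lost in the cable; 10 V: 2 A and 2 W lost (16x less).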

  1. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the Computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - was also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed
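
    A quick cross-check of the transfer figures quoted above (600 MB/s sustained for a week, and 3.6 PB moved during May 2008) in a few lines of Python:

        # Cross-check of the CCRC'08 transfer numbers quoted in the record above.
        MB, GB, PB = 1e6, 1e9, 1e15

        week_volume = 600 * MB * 7 * 24 * 3600
        print(f"600 MB/s for 7 days   ~ {week_volume / PB:.2f} PB out of CERN")

        may_average = 3.6 * PB / (31 * 24 * 3600)
        print(f"3.6 PB over May 2008  ~ {may_average / GB:.2f} GB/s averaged across ~300 links")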

  2. Alert Messaging in the CMS Distributed Workflow System

    International Nuclear Information System (INIS)

    Maxa, Zdenek

    2012-01-01

    WMAgent is the core component of the CMS workload management system. One of the features of this job managing platform is a configurable messaging system aimed at generating, distributing and processing alerts: short messages describing a given piece of alert-worthy information or a pathological condition. Apart from the framework's sub-components running within the WMAgent instances, there is a stand-alone application collecting alerts from all WMAgent instances running across the CMS distributed computing environment. The alert framework has a versatile design that allows for receiving alert messages also from other CMS production applications, such as the PhEDEx data transfer manager. We present implementation details of the system, including its Python implementation using ZeroMQ, CouchDB message storage and future plans, as well as operational experiences. Inter-operation with monitoring platforms such as Dashboard or Lemon is described.
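
    A minimal sketch of the ZeroMQ-based alert flow described above: a component pushes a short JSON alert towards a collector endpoint. The endpoint address and message fields are assumptions; the real WMAgent framework adds sender registration, alert levels and a CouchDB sink on the receiving side.

        import time
        import zmq  # pyzmq

        COLLECTOR = "tcp://localhost:5557"   # assumed collector endpoint

        context = zmq.Context()
        sender = context.socket(zmq.PUSH)
        sender.setsockopt(zmq.LINGER, 0)     # do not block on exit if no collector is listening
        sender.connect(COLLECTOR)

        alert = {
            "source": "ExampleComponent",    # hypothetical component name
            "level": "warning",
            "message": "job queue backlog above threshold",
            "timestamp": time.time(),
        }
        sender.send_json(alert)              # a collector would recv_json() these in a loop

        sender.close()
        context.term()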

  3. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  4. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  5. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of increased trigger output rate, increased rate of pileup interactions and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future as the LHC luminosity should continue to increase. We will discuss changes and plans to our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  6. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC which took place in September 2008. In Germany, several tier sites are set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2 as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure a proper and reliable operation 24 hours a day, especially during the time of data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  7. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab]

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
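
    As a sketch of the batch-submission layer named above, the snippet below writes a generic HTCondor submit description and hands it to condor_submit if it is available. The executable, resource requests and file names are placeholders; production CMS jobs enter the glideinWMS pool through pilots rather than through a plain submission like this.

        # Generic HTCondor submission sketch; all job parameters are placeholders.
        import shutil
        import subprocess
        from pathlib import Path

        submit_lines = [
            "universe       = vanilla",
            "executable     = run_payload.sh",   # placeholder payload script
            "arguments      = --events 1000",
            "output         = job.out",
            "error          = job.err",
            "log            = job.log",
            "request_cpus   = 1",
            "request_memory = 2000",
            "queue 1",
        ]
        Path("example.sub").write_text("\n".join(submit_lines) + "\n")

        if shutil.which("condor_submit"):
            # On a node with an HTCondor schedd this queues the job.
            subprocess.run(["condor_submit", "example.sub"], check=True)
        else:
            print("condor_submit not found; submit description written to example.sub")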

  8. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.

  9. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.]; Kuznetsov, V. [Cornell U.]; Magini, N. [Fermilab]; Repečka, A. [Vilnius U.]; Vaandering, E. [Fermilab]

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS carried out successful operations, and for reaching an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
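
    A minimal PySpark sketch of the kind of aggregation described above, counting how often each dataset was accessed according to monitoring records stored as JSON on HDFS. The input path and the field names ("dataset", "read_bytes") are assumptions about the record schema, not the actual CMS monitoring layout.

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("cms-monitoring-sketch").getOrCreate()

        # Assumed location and schema of the monitoring records.
        records = spark.read.json("hdfs:///path/to/monitoring/records/*.json")

        # How often each dataset was read, and how much data that amounted to.
        popularity = (records
                      .groupBy("dataset")
                      .agg(F.count("*").alias("accesses"),
                           F.sum("read_bytes").alias("bytes_read"))
                      .orderBy(F.desc("accesses")))

        popularity.show(20, truncate=False)
        spark.stop()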

  10. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
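
    The structure of the cost comparison described above can be sketched in a few lines; every price and utilisation figure here is a hypothetical placeholder, and the actual conclusion depends entirely on the real numbers.

        # Illustration of the dedicated-vs-cloud cost comparison; all inputs are hypothetical.
        def cost_per_core_hour_dedicated(purchase_per_core, lifetime_years,
                                         ops_per_core_year, utilisation):
            hours = lifetime_years * 365 * 24 * utilisation
            return (purchase_per_core + ops_per_core_year * lifetime_years) / hours

        dedicated = cost_per_core_hour_dedicated(
            purchase_per_core=150.0,   # hypothetical hardware cost per core
            lifetime_years=4,
            ops_per_core_year=30.0,    # hypothetical power/cooling/admin share
            utilisation=0.85,          # baseline load keeps the farm busy
        )
        cloud_on_demand = 0.10         # hypothetical price per core-hour

        print(f"dedicated : {dedicated:.3f} per core-hour at steady load")
        print(f"cloud     : {cloud_on_demand:.3f} per core-hour, paid only when bursting")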

  11. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, Valentin [Cornell U.]; Fischer, Nils Leif [Heidelberg U.]; Guo, Yuyi [Fermilab]

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short and long term storage layers, the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  12. CMS data and workflow management system

    CERN Document Server

    Fanfani, A; Bacchi, W; Codispoti, G; De Filippis, N; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Silvestris, L; Calzolari, F; Sarkar, S; Spiga, D; Cinquili, M; Lacaprara, S; Biasotto, M; Farina, F; Merlo, M; Belforte, S; Kavka, C; Sala, L; Harvey, J; Hufnagel, D; Fanzago, F; Corvo, M; Magini, N; Rehn, J; Toteva, Z; Feichtinger, D; Tuura, L; Eulisse, G; Bockelman, B; Lundstedt, C; Egeland, R; Evans, D; Mason, D; Gutsche, O; Sexton-Kennedy, L; Dagenhart, D W; Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Fisk, I; McBride, P; Bauerdick, L; Bakken, J; Rossman, P; Wicklund, E; Wu, Y; Jones, C; Kuznetsov, V; Riley, D; Dolgert, A; van Lingen, F; Narsky, I; Paus, C; Klute, M; Gomez-Ceballos, G; Piedra-Gomez, J; Miller, M; Mohapatra, A; Lazaridis, C; Bradley, D; Elmer, P; Wildish, T; Wuerthwein, F; Letts, J; Bourilkov, D; Kim, B; Smith, P; Hernandez, J M; Caballero, J; Delgado, A; Flix, J; Cabrillo-Bartolome, I; Kasemann, M; Flossdorf, A; Stadie, H; Kreuzer, P; Khomitch, A; Hof, C; Zeidler, C; Kalini, S; Trunov, A; Saout, C; Felzmann, U; Metson, S; Newbold, D; Geddes, N; Brew, C; Jackson, J; Wakefield, S; De Weirdt, S; Adler, V; Maes, J; Van Mulders, P; Villella, I; Hammad, G; Pukhaeva, N; Kurca, T; Semneniouk, I; Guan, W; Lajas, J A; Teodoro, D; Gregores, E; Baquero, M; Shehzad, A; Kadastik, M; Kodolova, O; Chao, Y; Ming Kuo, C; Filippidis, C; Walzel, G; Han, D; Kalinowski, A; Giro de Almeida, N M; Panyam, N

    2008-01-01

    CMS expects to manage many tens of petabytes of data to be distributed over several computing centers around the world. The CMS distributed computing and analysis model is designed to serve, process and archive the large number of events that will be generated when the CMS detector starts taking data. The underlying concepts and the overall architecture of the CMS data and workflow management system will be presented. In addition the experience in using the system for MC production, initial detector commissioning activities and data analysis will be summarized.

  13. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand, as limits and caps on usage are imposed. Our trial workflows allow us t...

  14. 75 FR 30839 - Privacy Act of 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer...

    Science.gov (United States)

    2010-06-02

    ... 1974; CMS Computer Match No. 2010-03, HHS Computer Match No. 1003, SSA Computer Match No. 1048, IRS... Services (CMS). ACTION: Notice of renewal of an existing computer matching program (CMP) that has an...'' section below for comment period. DATES: Effective Dates: CMS filed a report of the Computer Matching...

  15. CMS Connect

    Science.gov (United States)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks. Even though production and event processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a way that is friendly and integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs and automated reporting to standard CMS monitoring resources in an effortless way for their users.

  16. CMS analysis operations

    International Nuclear Information System (INIS)

    Andreeva, J; Maier, G; Spiga, D; Calloni, M; Colling, D; Fanzago, F; D'Hondt, J; Maes, J; Van Mulders, P; Villella, I; Klem, J; Letts, J; Padhi, S; Sarkar, S

    2010-01-01

    During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.

  17. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  18. 76 FR 14669 - Privacy Act of 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007

    Science.gov (United States)

    2011-03-17

    ... 1974; CMS Computer Match No. 2011-02; HHS Computer Match No. 1007 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice of computer... notice establishes a computer matching agreement between CMS and the Department of Defense (DoD). We have...

  19. CMS offline web tools

    International Nuclear Information System (INIS)

    Metson, S; Newbold, D; Belforte, S; Kavka, C; Bockelman, B; Dziedziniewicz, K; Egeland, R; Elmer, P; Eulisse, G; Tuura, L; Evans, D; Fanfani, A; Feichtinger, D; Kuznetsov, V; Lingen, F van; Wakefield, S

    2008-01-01

    We describe a relatively new effort within CMS to converge on a set of web based tools, using state of the art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally web interfaces have been added in HEP experiments as an afterthought. In the CMS offline area we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to effectively use the CMS computing system. The CMS web tools project aims to provide a consistent interface to all these tools

  20. CMS offline web tools

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S; Newbold, D [H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Belforte, S; Kavka, C [INFN, Sezione di Trieste (Italy); Bockelman, B [University of Nebraska Lincoln, Lincoln, NE (United States); Dziedziniewicz, K [CERN, Geneva (Switzerland); Egeland, R [University of Minnesota Twin Cities, Minneapolis, MN (United States); Elmer, P [Princeton (United States); Eulisse, G; Tuura, L [Northeastern University, Boston, MA (United States); Evans, D [Fermilab MS234, Batavia, IL (United States); Fanfani, A [Universita degli Studi di Bologna (Italy); Feichtinger, D [PSI, Villigen (Switzerland); Kuznetsov, V [Cornell University, Ithaca, NY (United States); Lingen, F van [California Institute of Technology, Pasadena, CA (United States); Wakefield, S [Blackett Laboratory, Imperial College, London (United Kingdom)]

    2008-07-15

    We describe a relatively new effort within CMS to converge on a set of web based tools, using state of the art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally web interfaces have been added in HEP experiments as an afterthought. In the CMS offline area we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to effectively use the CMS computing system. The CMS web tools project aims to provide a consistent interface to all these tools.

  1. 78 FR 39730 - Privacy Act of 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302

    Science.gov (United States)

    2013-07-02

    ... 1974; CMS Computer Match No. 2013-11; HHS Computer Match No. 1302 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS intends to conduct with State-based Administering...

  2. Monitoring data transfer latency in CMS computing operations

    CERN Document Server

    Bonacorsi, D; Magini, N; Sartirana, A; Taze, M; Wildish, T

    2015-01-01

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different fact...
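
    To make the idea of completion-time prediction concrete, here is a minimal sketch that estimates the time to finish a transfer from per-file progress and flags files that look stuck. The six-hour threshold and the field names are illustrative assumptions, not the actual PhEDEx latency-monitoring schema.

      # Illustrative only: predict completion time of a transfer and flag stuck files.
      import time

      def transfer_report(files, stuck_after=6 * 3600, now=None):
          """files: list of dicts with 'done' (bool), 'started' and 'last_update' (epoch seconds)."""
          now = time.time() if now is None else now
          done = sum(1 for f in files if f["done"])
          stuck = [f for f in files if not f["done"] and now - f["last_update"] > stuck_after]
          rate = done / max(now - min(f["started"] for f in files), 1.0)  # files per second
          remaining = len(files) - done
          eta = remaining / rate if rate > 0 else float("inf")
          return {"done": done, "stuck": len(stuck), "eta_hours": eta / 3600.0}

      files = [
          {"done": True, "started": 0.0, "last_update": 3600.0},
          {"done": False, "started": 0.0, "last_update": 4000.0},
      ]
      print(transfer_report(files, now=40000.0))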

  3. 78 FR 50419 - Privacy Act of 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310

    Science.gov (United States)

    2013-08-19

    ... 1974; CMS Computer Match No. 2013-10; HHS Computer Match No. 1310 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION: Notice of Computer Matching... notice announces the establishment of a CMP that CMS plans to conduct with the Department of Homeland...

  4. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  5. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  6. CMS 2006 - CMS France days; CMS 2006 les journees CMS FRANCE

    Energy Technology Data Exchange (ETDEWEB)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J

    2006-07-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronic needs. This document gathers the slides of the presentations.

  7. The commissioning of CMS sites: Improving the site reliability

    International Nuclear Information System (INIS)

    Belforte, S; Fisk, I; Flix, J; Hernandez, J M; Klem, J; Letts, J; Magini, N; Saiz, P; Sciaba, A

    2010-01-01

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use their network to transfer data, the functionality of all the site services relevant for CMS and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, the description of monitoring tools, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.
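
    The rating procedure can be thought of as a simple pass/fail aggregation over daily test results, as in the toy sketch below; the 80% threshold and the test names are illustrative assumptions rather than the official CMS site-readiness criteria.

      # Toy site-readiness rating: a site is commissioned if enough days pass all tests.
      def site_ready(daily_results, threshold=0.8):
          """daily_results: list of dicts mapping test name -> bool, one per day."""
          good_days = sum(1 for day in daily_results if all(day.values()))
          return good_days / len(daily_results) >= threshold

      week = [
          {"transfers": True, "services": True, "workflows": True},
          {"transfers": True, "services": False, "workflows": True},
      ] + [{"transfers": True, "services": True, "workflows": True}] * 5

      print("site commissioned:", site_ready(week))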

  8. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included shutdown construction, maintenance and repairs; status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08; preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate CMS Management and the Detector Groups for the...

  9. Opportunistic resource usage in CMS

    International Nuclear Information System (INIS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D; Gutsche, O; Tadel, M; Sfiligoi, I; Letts, J; Wuerthwein, F; McCrea, A; Bockelman, B; Fajardo, E; Linares, L; Wagner, R; Konstantinov, P; Blumenfeld, B; Bradley, D

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them especially for CMS to run the experiment's applications. But there are more resources available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the Grid or through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the glideinWMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  10. CMS Computing Software and Analysis Challenge 2006

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)

    2007-10-15

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, calibration and alignment iterative executions, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  11. CMS Computing Software and Analysis Challenge 2006

    International Nuclear Information System (INIS)

    De Filippis, N.

    2007-01-01

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with the data handling model. With this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, like the prompt reconstruction, the data streaming, calibration and alignment iterative executions, the data distribution to regional sites, up to the end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and the analysis jobs. An overview of the status and results of the CSA06 is presented in this work.

  12. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate C...

  13. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Jim Virdee

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratula...

  14. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operation of the computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rates of future LHC operation, together with high pile-up interactions, improvements in the usage of the current computing facilities and new technologies have become necessary. Especially for the challenge of the future HL-LHC, a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies such as commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks, providing the capacity for the increasing needs of large-scale scientific computing.

  15. CMS 2006 - CMS France days

    International Nuclear Information System (INIS)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J.

    2006-01-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronic needs. This document gathers the slides of the presentations.

  16. Debugging data transfers in CMS

    International Nuclear Information System (INIS)

    Bagliesi, G; Belforte, S; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Maes, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C-M; Letts, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG tiers which support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure aimed to move a link from a debugging phase in a separate and independent environment to a production environment once a set of agreed conditions is achieved for that link. The goal was to deliver working transfer routes, one by one, to the CMS data operations team. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered during the lifetime of the task force in debugging data transfer links in CMS are explained and summarized.
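
    The core of the DDT procedure is a promotion rule for each link. The sketch below captures the flavour of such a rule: a link moves from the debugging to the production environment after a number of consecutive good days. The concrete thresholds are invented for illustration and are not the agreed DDT conditions.

      # Toy link-commissioning rule: promote a link after N consecutive good days.
      def promote_link(history, min_rate_mbps=20.0, min_success=0.9, days_required=3):
          """history: list of per-day dicts with 'rate_mbps' and 'success_ratio'."""
          streak = 0
          for day in history:
              if day["rate_mbps"] >= min_rate_mbps and day["success_ratio"] >= min_success:
                  streak += 1
                  if streak >= days_required:
                      return "production"
              else:
                  streak = 0
          return "debugging"

      history = [
          {"rate_mbps": 25.0, "success_ratio": 0.95},
          {"rate_mbps": 30.0, "success_ratio": 0.97},
          {"rate_mbps": 28.0, "success_ratio": 0.99},
      ]
      print(promote_link(history))  # -> production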

  17. Debugging Data Transfers in CMS

    CERN Document Server

    Bagliesi, G; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C M; Letts, J; Maes, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L; Sonajalg, S; Wu, Y; Van Mulders, P; Villella, I; Wurthwein, F

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests called the LoadTest was designed and deployed to equip the WLCG sites that support CMS with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS sites by designing and enforcing a clear procedure to debug problematic links. This procedure aimed to move a link from a debugging phase in a separate and independent environment to a production environment once a set of agreed conditions is achieved for that link. The goal was to deliver one by one working transfer routes to the CMS data operations team...

  18. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  19. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  20. CMS Experiment Data Processing at RDMS CMS Tier 2 Centers

    CERN Document Server

    Gavrilov, V; Korenkov, V; Tikhonenko, E; Shmatov, S; Zhiltsov, V; Ilyin, V; Kodolova, O; Levchuk, L

    2012-01-01

    The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994 [1]. RDMS CMS takes an active part in the Compact Muon Solenoid (CMS) Collaboration [2] at the Large Hadron Collider (LHC) [3] at CERN [4]. The RDMS CMS Collaboration brings together more than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) member states. RDMS scientists, engineers and technicians participated actively in the design, construction and commissioning of all CMS sub-detectors in the forward regions. The RDMS CMS physics program has been developed taking into account the essential role of these sub-detectors for the corresponding physics channels. RDMS scientists made large contributions to the preparation of QCD, electroweak, exotics, heavy-ion and other physics studies at CMS. An overview of RDMS CMS physics tasks and RDMS CMS computing activities is presented in [5-11]. RDMS CMS computing support should satisfy the LHC data processing and analysis requirements at the running phase of the CMS experime...

  1. Optimization of Italian CMS computing centers via MIUR funded research projects

    International Nuclear Information System (INIS)

    Boccali, T; Mazzoni, E; Donvito, G; Pompili, A; Ricca, G Della; Talamo, I; Argiro, S; Grandi, C; Bonacorsi, D; Lista, L; Fabozzi, F; Barone, L M; Santocchia, A; Riahi, H; Tricomi, A; Sgaravatto, M; Maron, G

    2014-01-01

    In 2012, 14 Italian institutions participating in LHC experiments (10 in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize analysis activities and, in general, the Tier2/Tier3 infrastructure. A wide range of activities is actively carried out: they cover data distribution over WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on the porting of the CMS software stack to new highly performing / low power architectures.

  2. CMS distributed analysis infrastructure and operations: experience with the first LHC data

    International Nuclear Information System (INIS)

    Vaandering, E W

    2011-01-01

    The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite and glidein-based workload management systems (WMS). We provide the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS provides a successful analysis workflow. We present the operational experience as well as methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity block selections via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.

  3. Operational experience with CMS Tier-2 sites

    International Nuclear Information System (INIS)

    Gonzalez Caballero, I

    2010-01-01

    In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.

  4. Challenging Data Management in CMS Computing with Network-aware Systems

    CERN Document Server

    Bonacorsi, Daniele

    2013-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidt...

  5. Challenging data and workload management in CMS Computing with network-aware systems

    Science.gov (United States)

    D, Bonacorsi; T, Wildish

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  6. Challenging data and workload management in CMS Computing with network-aware systems

    International Nuclear Information System (INIS)

    Bonacorsi D; Wildish T

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of Intelligent Network Services, including also bandwidth on demand concepts. In this paper, we will review the work done in CMS on this, and the next steps.

  7. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and of local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performances of the on-demand allocated resources, and we discuss the lessons learned and the next steps.
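
    A minimal sketch of the cloud-bursting decision, assuming a simple rule that scales the number of dynamically added worker nodes with the batch backlog; the scaling rule and the provision() stub are illustrative assumptions, whereas the real setup drives the OpenStack API and registers the new nodes with LSF.

      # Toy elastic-scaling rule: request extra worker nodes based on queue backlog.
      def workers_to_add(pending_jobs, running_workers, jobs_per_worker=8, max_workers=100):
          wanted = -(-pending_jobs // jobs_per_worker)  # ceiling division
          return max(0, min(wanted - running_workers, max_workers - running_workers))

      def provision(n):
          # Stub standing in for a cloud "boot instance" call.
          return [f"vm-worker-{i:03d}" for i in range(n)]

      extra = workers_to_add(pending_jobs=250, running_workers=10)
      print(extra, provision(extra)[:3])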

  8. CMS distributed data analysis with CRAB3

    Science.gov (United States)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
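
    The client/server interaction can be pictured as a small REST exchange, as in the hedged sketch below. The endpoint path, payload fields and example dataset name are hypothetical and do not reproduce the real CRAB3 REST interface.

      # Hypothetical sketch of a lightweight client talking to a central REST server.
      import json
      import urllib.request

      def submit_task(server, task):
          req = urllib.request.Request(
              url=f"{server}/task",
              data=json.dumps(task).encode(),
              headers={"Content-Type": "application/json"},
              method="POST",
          )
          with urllib.request.urlopen(req) as resp:  # would contact the server
              return json.load(resp)

      task = {"dataset": "/Example/Run2015/AOD",
              "pset": "analysis_cfg.py",
              "site_whitelist": ["T2_IT_Bari"]}
      print(json.dumps(task, indent=2))  # dry run: show the payload only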

  9. CMS experience of running glideinWMS in High Availability mode

    CERN Document Server

    Sfiligoi, Igor; Belforte, Stefano; Mc Crea, Alison Jean; Larson, Krista Elaine; Zvada, Marian; Holzman, Burt; P Mhashilkar; Bradley, Daniel Charles; Saiz Santos, Maria Dolores; Fanzago, Federica; Gutsche, Oliver; Martin, Terrence; Wuerthwein, Frank Karl

    2013-01-01

    The CMS experiment at the Large Hadron Collider is relying on the HTCondor-based glideinWMS batch system to handle most of its distributed computing needs. In order to minimize the risk of disruptions due to software and hardware problems, and also to simplify the maintenance procedures, CMS has set up its glideinWMS instance to use most of the attainable High Availability (HA) features. The setup involves running services distributed over multiple nodes, which in turn are located in several physical locations, including Geneva, Switzerland, Chicago, Illinois and San Diego, California. This paper describes the setup used by CMS, the HA limits of this setup, as well as a description of the actual operational experience spanning many months.
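
    One simple way to picture the high-availability setup is a client that walks through a list of geographically distributed replicas and uses the first one that answers, as in the sketch below; the hostnames are placeholders, and the plain TCP check stands in for whatever health probe a real deployment would use.

      # Toy failover: return the first service replica that accepts a connection.
      import socket

      REPLICAS = ["factory-geneva.example.org",
                  "factory-chicago.example.org",
                  "factory-sandiego.example.org"]

      def pick_replica(hosts, port=9618, timeout=2.0):
          for host in hosts:
              try:
                  with socket.create_connection((host, port), timeout=timeout):
                      return host
              except OSError:
                  continue
          raise RuntimeError("no replica reachable")

      # Example: pick_replica(REPLICAS) returns the first reachable host,
      # or raises RuntimeError if none of the replicas answers.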

  10. Studies of CMS data access patterns with machine learning techniques

    CERN Document Server

    De Luca, Silvia

    This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from the deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up a machinery able to eventually predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, allows one to gain precious insight on storage occupancy ove...
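
    A toy version of the supervised-classification machinery, trained on synthetic features standing in for recent accesses, distinct users and dataset age; the features, labels and model choice are assumptions for illustration only, not the actual analysis of the thesis.

      # Toy popularity classifier on synthetic features (illustration only).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 1000
      X = np.column_stack([
          rng.poisson(20, n),      # accesses in the last week
          rng.poisson(5, n),       # distinct users in the last week
          rng.uniform(0, 24, n),   # dataset age in months
      ])
      y = (X[:, 0] + 3 * X[:, 1] > 40).astype(int)  # synthetic "popular" label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))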

  11. Challenging data and workload management in CMS Computing with network-aware systems

    CERN Document Server

    Wildish, Anthony

    2014-01-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering into an operational review phase in order to concretely assess areas of possible improvements and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start, the Networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations - as from the MONARC model - in terms of performance, stability and reliability. The low-latency transfers of PetaBytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable Networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of "Intelligent Network Services", including also bandwidth on demand concepts. In this paper, we will ...

  12. CMS resource utilization and limitations on the grid after the first two years of LHC collisions

    Energy Technology Data Exchange (ETDEWEB)

    Bagliesi, Giuseppe [Pisa U.; Bloom, Kenneth [Nebraska U.; Bonacorsi, Daniele [Bologna U.; Brew, Chris [Rutherford; Fisk, Ian [Fermilab; Flix, Jose [Madrid, CIEMAT; Kreuzer, Peter [Aachen, Tech. Hochsch.; Sciaba, Andrea [CERN

    2012-01-01

    After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for operational performance, and CMS records data at more than 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to ever-larger event sizes and processing times. The CMS computing system has responded admirably to these challenges, but some reoptimization of the computing model has been required to maximize the efficient delivery of data analysis results by the collaboration in the face of increasingly constrained computing resources. We present the current status of the system, describe the recent performance, and discuss the challenges ahead and how CMS intends to meet them.

  13. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on forms CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and, therefore, is more reflective of underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to form completion by physicians. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adopting similar algorithms in available EMR systems may offer more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
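
    Schematically, the comparison reduces to set operations between physician-checked comorbidities and algorithm-derived ones, as in the sketch below; the code-to-comorbidity mapping is a small invented placeholder for the billing-code, prescription and NLP logic used in the study.

      # Toy comparison of form-checked vs. algorithm-derived comorbidities.
      CODE_TO_COMORBIDITY = {"I50": "heart_failure", "E11": "diabetes", "J44": "copd"}

      def algorithm_comorbidities(billing_codes):
          return {CODE_TO_COMORBIDITY[c] for c in billing_codes if c in CODE_TO_COMORBIDITY}

      def compare(physician_checked, billing_codes):
          algo = algorithm_comorbidities(billing_codes)
          return {"algorithm_only": sorted(algo - physician_checked),
                  "physician_only": sorted(physician_checked - algo),
                  "both": sorted(algo & physician_checked)}

      print(compare({"diabetes"}, ["E11", "I50"]))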

  14. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)
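
    Both methods ultimately compare distributions built from W-like observables; a minimal numerical example of the W transverse mass, m_T = sqrt(2 pT(mu) pT(nu) (1 - cos(dphi))), is given below with purely illustrative input values.

      # Standard transverse-mass formula with illustrative inputs.
      import math

      def transverse_mass(pt_mu, pt_nu, dphi):
          return math.sqrt(2.0 * pt_mu * pt_nu * (1.0 - math.cos(dphi)))

      # Back-to-back muon and neutrino with pT ~ m_W / 2 give m_T ~ m_W.
      print(transverse_mass(40.2, 40.2, math.pi))  # ~80.4 GeV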

  15. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  16. Distributed error and alarm processing in the CMS data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activities. Error and alarm processing entails the notification, collection, storing and visualization of all exceptional conditions occurring in the highly distributed CMS online system using a uniform scheme. Alerts and reports are shown online by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used and deals with operational aspects during the first years of LHC operation. In particular we focus on performance, stability, and integration with the CMS sub-detectors.
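
    A uniform exception scheme of this kind can be pictured as a single alarm record type shared by all producers, as in the sketch below; the field names and the in-memory history list are assumptions, not the actual CMS DAQ error format or persistency service.

      # Toy uniform alarm record plus a stand-in for collection and persistency.
      import json
      import time
      from dataclasses import dataclass, asdict, field

      @dataclass
      class Alarm:
          source: str       # e.g. a sub-detector or online application
          severity: str     # "warning", "error", "fatal"
          message: str
          timestamp: float = field(default_factory=time.time)

      history = []          # stand-in for the persistency service

      def report(alarm):
          history.append(alarm)
          print(json.dumps(asdict(alarm)))

      report(Alarm(source="tracker-readout", severity="error", message="link timeout"))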

  17. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...

  18. Monitoring techniques and alarm procedures for CMS Services and Sites in WLCG

    International Nuclear Information System (INIS)

    Molina-Perez, J; Sciabà, A; Magini, N; Bonacorsi, D; Gutsche, O; Flix, J; Kreuzer, P; Fajardo, E; Boccali, T; Klute, M; Gomes, D; Kaselis, R; Butenas, I; Du, R; Wang, W

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  19. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  20. CMS AWARDS

    CERN Multimedia

    Steven Lowette

    Working under great time pressure towards a common goal in gradual steps can sometimes cause us to forget to take a step back, and celebrate what marvels have been achieved. A general need was felt within CMS to expand the recognition for our young scientists that made outstanding, well recognized and creative contributions to CMS, which served to significantly advance the performance of CMS as a complete and powerful experiment. Therefore, the Collaboration Board endorsed in March 2009 a proposal from the CB Chair and Advisory Group to award each year the newly created "CMS Achievement Award" to fourteen graduate students and postdocs that made exceptional contributions to the Tracker, ECAL, HCAL and Muon subdetectors as well as the TriDAS project, the Commissioning of CMS and the Offline Software and Computing projects. It was also agreed that there was a need to go back in time, and retroactively attribute awards for the years 2007 and 2008 when CMS went from a bare cavern to a detect...

  1. CMS Data Transfer operations after the first years of LHC collisions

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CMS experiment possesses a distributed computing infrastructure, and its performance heavily depends on the fast and smooth distribution of data between different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storing and archiving, and timeliness and good quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data have to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are synchronized back to Tier-1 sites for further archival. At the core of all the transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging of all transfer issues. Based on transfer quality information, the Site Readiness tool is used to create plans for resource utilization in the future. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all ov...

  2. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    International Nuclear Information System (INIS)

    Cinquilli, M; Spiga, D; Konstantinov, P; Mascheroni, M; Grandi, C; Hernàndez, J M; Riahi, H; Vaandering, E

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users’ ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL Databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.
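
    The "central queue plus pool of agents" pattern can be sketched with a few lines of Python, with threads standing in for geographically distributed agents; this is an illustration of the pattern only, not the CRAB3 implementation.

      # Toy central queue consumed by a pool of agents (threads as stand-ins).
      import queue
      import threading

      tasks = queue.Queue()

      def agent(name):
          while True:
              task = tasks.get()
              if task is None:          # sentinel: shut the agent down
                  break
              print(f"{name} submitting jobs for {task}")
              tasks.task_done()

      workers = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(3)]
      for w in workers:
          w.start()
      for t in ["user_task_1", "user_task_2", "user_task_3"]:
          tasks.put(t)                  # tasks are injected centrally
      tasks.join()
      for _ in workers:
          tasks.put(None)
      for w in workers:
          w.join()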

  3. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Science.gov (United States)

    Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.

    2012-12-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users’ ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL Databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.

  4. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Spiga, D. [CERN; Grandi, C. [INFN, Bologna; Hernandez, J. M. [Madrid, CIEMAT; Konstantinov, P. [CERN; Mascheroni, M. [CERN; Riahi, H. [INFN, Perugia; Vaandering, E. [Fermilab

    2012-01-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL Databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.

  5. CMS Computing Operations During Run1

    CERN Document Server

    Gutsche, Oliver

    2013-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  6. CMS computing operations during run 1

    CERN Document Server

    Adelman, J; Artieda, J; Bagliesi, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzago, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M

    2014-01-01

    During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.

  7. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    Kuznetsov, Valentin; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity, based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure.

  8. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity, based on CMS meta-data, which can be used as a model for dynamic data placement and provides the foundation of a data-driven approach for the CMS computing infrastructure. (paper)

  9. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developi...

  10. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  11. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  12. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Hernández, J. M. [Madrid, CIEMAT; Khan, F. A. [Quaid-i-Azam U.; Letts, J. [UC, San Diego; Majewski, K. [Fermilab; Rodrigues, A. M. [Fermilab; McCrea, A. [UC, San Diego; Vaandering, E. [Fermilab

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing-time budgets. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, which in 2015 are responsible for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
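    A minimal sketch of the scheduling idea described above: a pilot that owns a fixed number of cores dynamically hands slots of varying width to queued payloads (multicore reconstruction jobs alongside single-core analysis jobs). Names and numbers are illustrative, not the CMS implementation.

    ```python
    # Dynamic partitioning of a pilot's cores among payloads of different widths.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Payload:
        name: str
        cores: int                  # cores requested by the payload

    @dataclass
    class Pilot:
        total_cores: int
        free_cores: int = field(init=False)
        running: List[Payload] = field(default_factory=list)

        def __post_init__(self):
            self.free_cores = self.total_cores

        def try_start(self, payload: Payload) -> bool:
            """Start the payload if enough cores are currently free."""
            if payload.cores <= self.free_cores:
                self.free_cores -= payload.cores
                self.running.append(payload)
                return True
            return False

        def finish(self, payload: Payload) -> None:
            """Return the payload's cores to the free pool."""
            self.running.remove(payload)
            self.free_cores += payload.cores

    if __name__ == "__main__":
        pilot = Pilot(total_cores=8)
        queue = [Payload("reco-multicore", 4), Payload("analysis-1", 1),
                 Payload("analysis-2", 1), Payload("sim-multicore", 4)]
        started = [p.name for p in queue if pilot.try_start(p)]
        print(started, "free cores:", pilot.free_cores)
    ```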

  13. CMS Centre at CERN

    CERN Multimedia

    2007-01-01

    A new "CMS Centre" is being established on the CERN Meyrin site by the CMS collaboration. It will be a focal point for communications, where physicists will work together on data quality monitoring, detector calibration, offline analysis of physics events, and CMS computing operations. Construction of the CMS Centre begins in the historic Proton Synchrotron (PS) control room, which is about to start a new life: opened by Niels Bohr in 1960, the room will be reused by CMS to build its control centre. When finished, it will resemble the CERN Control Centre. The LHC@FNAL Centre, in operation at Fermilab in the US, will work very closely with the CMS Centre, as well as with the CERN Control Centre. (Photo Fermilab)

  14. Pharmacokinetics of Colistin Methansulphonate (CMS) and Colistin after CMS Nebulisation in Baboon Monkeys.

    Science.gov (United States)

    Marchand, Sandrine; Bouchene, Salim; de Monte, Michèle; Guilleminault, Laurent; Montharu, Jérôme; Cabrera, Maria; Grégoire, Nicolas; Gobin, Patrice; Diot, Patrice; Couet, William; Vecellio, Laurent

    2015-10-01

    The objective of this study was to compare two different nebulizers, Eflow rapid® and Pari LC star®, by scintigraphy and PK modeling, simulating epithelial lining fluid concentrations from measured plasma concentrations after nebulization of CMS in baboons. Three baboons received CMS by IV infusion and by two types of aerosol generators, and colistin by subcutaneous infusion. Gamma imaging was performed after nebulisation to determine the colistin distribution in the lungs. Blood samples were collected during 9 h and colistin and CMS plasma concentrations were measured by LC-MS/MS. A population pharmacokinetic analysis was conducted and simulations were performed to predict lung concentrations after nebulization. Higher aerosol deposition in the lungs was observed by scintigraphy when CMS was nebulized with the Pari LC star® than with the Eflow Rapid® nebulizer. This observation was confirmed by the fraction of CMS deposited in the lung (3.5% versus 1.3%, respectively). CMS and colistin simulated concentrations in epithelial lining fluid were higher with the Pari LC star® than with the Eflow rapid® system. A limited fraction of CMS reaches the lungs after nebulization, but higher colistin plasma concentrations were measured and higher intrapulmonary colistin concentrations were simulated with the Pari LC Star® than with the Eflow Rapid® system.

  15. CMS Results of Grid-related activities using the early deployed LCG Implementations

    CERN Document Server

    Coviello, Tommaso; De Filippis, Nicola; Donvito, Giacinto; Maggi, Giorgio; Pierro, A; Bonacorsi, Daniele; Capiluppi, Paolo; Fanfani, Alessandra; Grandi, Claudio; Maroney, Owen; Nebrensky, H; Donno, Flavia; Jank, Werner; Sciabà, Andrea; Sinanis, Nick; Colling, David; Tallini, Hugh; MacEvoy, Barry C; Wang, Shaowen; Kaiser, Joseph; Osman, Asif; Charlot, Claude; Semenjouk, I; Biasotto, Massimo; Fantinel, Sergio; Corvo, Marco; Fanzago, Federica; Mazzucato, Mirco; Verlato, Marco; Go, Apollo; Khan Chia Ming; Andreozzi, S; Cavalli, A; Ciaschini, V; Ghiselli, A; Italiano, A; Spataro, F; Vistoli, C; Tortone, G

    2004-01-01

    The CMS Experiment is defining its Computing Model and is experimenting with and testing the new distributed features offered by many Grid Projects. This report describes the use by CMS of the early-deployed systems of LCG (LCG-0 and LCG-1). Most of the features discussed here came from the EU-implemented middleware, even if some of the tested capabilities were in common with the US-developed middleware. This report describes the simulation of about 2 million CMS detector events, which were generated as part of the official CMS Data Challenge 04 (Pre-Challenge Production). The simulations were done on a CMS-dedicated testbed (CMS-LCG-0), where an ad-hoc modified version of the LCG-0 middleware was deployed and where the CMS Experiment had complete control, and on the official early LCG delivered system (with the LCG-1 version). Modifications to the CMS simulation tools for event production were studied and achieved, together with necessary adaptations of the middleware services. Bilateral feedback (betwee...

  16. A Population WB-PBPK Model of Colistin and its Prodrug CMS in Pigs: Focus on the Renal Distribution and Excretion.

    Science.gov (United States)

    Viel, Alexis; Henri, Jérôme; Bouchène, Salim; Laroche, Julian; Rolland, Jean-Guy; Manceau, Jacqueline; Laurentie, Michel; Couet, William; Grégoire, Nicolas

    2018-03-12

    The objective was the development of a whole-body physiologically-based pharmacokinetic (WB-PBPK) model for colistin, and its prodrug colistimethate sodium (CMS), in pigs to explore their tissue distribution, especially in kidneys. Plasma and tissue concentrations of CMS and colistin were measured after systemic administration of different dosing regimens of CMS in pigs. The WB-PBPK model was developed based on these data using a non-linear mixed-effects approach and the NONMEM software. A detailed sub-model was implemented for kidneys to handle the complex disposition of CMS and colistin within this organ. The WB-PBPK model captured the kinetic profiles of CMS and colistin in plasma well. In kidneys, an accumulation and slow elimination of colistin were observed and well described by the model. Kidneys seemed to have a major role in the elimination processes, through tubular secretion of CMS and intracellular degradation of colistin. Lastly, to illustrate the usefulness of the PBPK model, an estimation of the withdrawal periods after veterinary use of CMS in pigs was made. The WB-PBPK model gives an insight into the renal distribution and elimination of CMS and colistin in pigs; it may be further developed to explore colistin-induced nephrotoxicity in humans.
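    A minimal kinetic sketch of the prodrug/drug relationship described above: CMS in plasma is cleared partly by conversion into colistin, which is then eliminated itself. The rate constants and dosing scheme below are illustrative placeholders, not parameters of the published WB-PBPK model.

    ```python
    # Toy two-state model: CMS -> colistin conversion plus elimination of each.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_conv = 0.3   # 1/h, hypothetical CMS -> colistin conversion rate
    k_cms  = 0.5   # 1/h, hypothetical CMS elimination (e.g. renal) rate
    k_col  = 0.2   # 1/h, hypothetical colistin elimination rate

    def rates(t, y):
        cms, col = y
        dcms = -(k_conv + k_cms) * cms          # CMS lost to conversion + excretion
        dcol = k_conv * cms - k_col * col       # colistin formed from CMS, then cleared
        return [dcms, dcol]

    # IV bolus of CMS at t=0 (arbitrary amount units), no pre-existing colistin.
    sol = solve_ivp(rates, (0.0, 24.0), [100.0, 0.0],
                    t_eval=np.linspace(0.0, 24.0, 13))
    for t, cms, col in zip(sol.t, sol.y[0], sol.y[1]):
        print(f"t={t:5.1f} h   CMS={cms:7.2f}   colistin={col:7.2f}")
    ```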

  17. Using ssh as portal - The CMS CRAB over glideinWMS experience

    CERN Document Server

    Belforte, Stefano; Letts, James; Fanzago, Federica; Saiz Santos, Maria Dolores; Martin, Terrence

    2013-01-01

    The User Analysis of the CMS experiment is performed in a distributed way using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server version of CRAB to a purely client-based solution, with ssh being used to interface with the HTCondor-based glideinWMS batch system. This switch has resulted in a significant improvement in user satisfaction, as well as in a significant simplification of the CRAB code base and of the operation support. This paper presents the architecture of the ssh-based CRAB package and the rationale behind it, as well as the operational experience of running both the client-server and the ssh-based versions in parallel for several months.
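    An illustrative sketch of the "ssh as portal" idea: the client builds a job description locally and hands it to a remote HTCondor scheduler over ssh, so no grid service needs to run on the user's machine. The host name, paths and submit-file contents are placeholders, not the CRAB implementation.

    ```python
    # Stage a submit file to a remote HTCondor submit node and run condor_submit there.
    import os
    import subprocess
    import tempfile

    SUBMIT_HOST = "submit.example.org"       # hypothetical glideinWMS submit node

    SUBMIT_FILE = """\
    universe   = vanilla
    executable = run_analysis.sh
    arguments  = $(Process)
    queue 10
    """

    def submit_over_ssh(user: str) -> str:
        """Copy the submit description to the remote host and submit it there."""
        with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
            f.write(SUBMIT_FILE)
            local_path = f.name
        remote_path = os.path.basename(local_path)
        subprocess.run(["scp", local_path, f"{user}@{SUBMIT_HOST}:{remote_path}"],
                       check=True)
        result = subprocess.run(["ssh", f"{user}@{SUBMIT_HOST}",
                                 "condor_submit", remote_path],
                                check=True, capture_output=True, text=True)
        return result.stdout     # e.g. "10 job(s) submitted to cluster NNN."

    if __name__ == "__main__":
        print(submit_over_ssh("cmsuser"))
    ```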

  18. Use of glide-ins in CMS for production and analysis

    International Nuclear Information System (INIS)

    Bradley, D; Gutsche, O; Holzman, B; Sfiligoi, I; Vaandering, E; Hahn, K; Padhi, S; Pi, H; Wuerthwein, F; Spiga, D

    2010-01-01

    With the evolution of various grid federations, Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons are submitted to the worker nodes via Condor-G. Once activated, they preserve the Master-Worker relationship, with the worker first validating the execution environment on the worker node before pulling jobs sequentially until the expiry of their lifetimes. The combination of late binding and validation significantly reduces the overall failure rate visible to CMS physicists. We discuss the extensive use of glideinWMS since the computing challenge CCRC-08, in order to prepare for the forthcoming LHC data-taking period. The key features essential to the success of large-scale production and analysis on CMS resources across major grid federations, including EGEE, OSG and NorduGrid, are outlined. The use of glide-ins via the CRAB server mechanism and ProdAgent, as well as first-hand experience of using the next-generation CREAM computing element within the CMS framework, is discussed.
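    A minimal sketch of the late-binding pattern described above: a glide-in style pilot first validates its execution environment on the worker node, then pulls payload jobs one after another until its lifetime expires. The queue interface and the validation check are illustrative placeholders.

    ```python
    # Pilot loop: validate the node, then pull jobs until the lifetime runs out.
    import shutil
    import time

    def environment_ok(min_disk_gb: float = 10.0) -> bool:
        """Validate the worker node before accepting any payload."""
        free_gb = shutil.disk_usage(".").free / 1e9
        return free_gb >= min_disk_gb

    def run_pilot(queue, lifetime_s: float) -> None:
        deadline = time.time() + lifetime_s
        if not environment_ok():
            print("validation failed: returning without pulling jobs")
            return
        while time.time() < deadline and queue:
            job = queue.pop(0)                   # pull the next payload
            print(f"running payload {job['name']}")
            time.sleep(job["runtime_s"])         # stand-in for the real payload

    if __name__ == "__main__":
        pending = [{"name": "analysis-001", "runtime_s": 1},
                   {"name": "analysis-002", "runtime_s": 1}]
        run_pilot(pending, lifetime_s=60)
    ```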

  19. Opportunistic usage of the CMS online cluster using a cloud overlay

    CERN Document Server

    Chaze, Olivier; Andronidis, Anastasios; Behrens, Ulf; Branson, James; Brummer, Philipp; Contescu, Alexandru-Cristian; Cittolin, Sergio; Craigs, Benjamin; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, M; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre Georg; Jimenez-Estupiñán, Raul; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Reis, Thomas; Simelevicius, Dainius; Zejdl, Petr

    2016-01-01

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to t...

  20. An outlook of the user support model to educate the users community at the CMS Experiment

    CERN Document Server

    Malik, Sudhir

    2011-01-01

    The CMS (Compact Muon Solenoid) experiment is one of the two large general-purpose particle physics detectors built at the LHC (Large Hadron Collider) at CERN in Geneva, Switzerland. The diverse collaboration, combined with a highly distributed computing environment and Petabytes per year of collected data, makes CMS unlike any previous High Energy Physics collaboration. This presents new challenges in educating and bringing users, coming from different cultural, linguistic and social backgrounds, up to speed to contribute to the physics analysis. CMS has been able to deal with this new paradigm by deploying a user support structure model that uses collaborative tools to educate users about the software, computing and physics tools specific to CMS. To carry out the user support mission worldwide, an LHC Physics Centre (LPC) was created a few years ago at Fermilab as a hub for US physicists. The LPC serves as a "brick and mortar" location of physics excellence for the CMS physicists, where graduate and postgraduate scien...

  1. CMS Centres Worldwide - a New Collaborative Infrastructure

    International Nuclear Information System (INIS)

    Taylor, Lucas

    2011-01-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected 'CMS Centres' at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running 'telepresence' video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  2. Charge Distribution Dependency on Gap Thickness of CMS Endcap RPC

    CERN Document Server

    Park, Sung K.; Lee, Kyongsei

    2016-01-01

    We report a systematic study of the dependence of the charge distribution of the CMS Resistive Plate Chamber (RPC) on gap thickness. Prototypes of double-gap RPCs with six different gap thicknesses, ranging from 1.0 to 2.0 mm in 0.2-mm steps, have been built with 2-mm-thick phenolic high-pressure-laminated plates. The efficiencies of the six gaps are measured as a function of the effective high voltage. We find that the strength of the electric field in the gap decreases as the gap thickness increases. The charge distributions in the six gaps are measured. The space-charge effect is seen in the charge distribution at the higher voltages. The logistic function is used to fit the charge distribution data. Smaller charges can be produced within a smaller gas gap, but the digitization threshold must also be lowered to utilize these smaller charges.
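    An illustrative sketch of the type of fit mentioned above: a logistic (sigmoid) curve fitted to measured points with scipy. The data values below are made up; in the paper the fit is applied to the measured charge distributions of the gaps.

    ```python
    # Fit a logistic curve to hypothetical (charge, fraction) measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, plateau, x_half, slope):
        """Standard logistic function: rises from 0 to `plateau` around `x_half`."""
        return plateau / (1.0 + np.exp(-slope * (x - x_half)))

    # Hypothetical measurements (e.g. cumulative fraction vs charge in pC).
    x_data = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
    y_data = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.92, 0.97, 0.99])

    popt, pcov = curve_fit(logistic, x_data, y_data, p0=[1.0, 2.0, 2.0])
    plateau, x_half, slope = popt
    print(f"plateau={plateau:.3f}, midpoint={x_half:.3f} pC, slope={slope:.3f} /pC")
    ```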

  3. Managing the CMS Online Software integrity through development and production cycles

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Data Acquisition system of the Compact Muon Solenoid experiment at CERN is a distributed system made of several different network technologies and computers, collecting data from more than 600 custom detector Front-End Drivers. It assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GByte/s. The architecture takes advantage of the latest developments in the computing industry. For data concentration, 10/40 Gbit Ethernet technologies are used, while a 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder, with a throughput of ~4 Tbps. The CMS Online Software (CMSOS) infrastructure is a complex product created specifically for the development of large distributed data acquisition systems, as well as of all the application components needed to achieve the CMS data acquisition task. It is designed to benefit from different networking technologies and from the parallelism available on processing platforms such as multi-core or multi-processor systems. It provides platform i...

  4. CMS overview

    CERN Document Server

    AUTHOR|(CDS)2071615

    2016-01-01

    The most recent CMS data related to high-density QCD are presented for pp and PbPb collisions at 2.76 TeV and pPb collisions at 5.02 TeV. PbPb collisions are essential for understanding the collective behavior and final-state effects relevant to the detailed characteristics of hot, dense partonic matter, whereas pPb collisions provide critical information on initial-state effects, including the modification of the parton distribution function in cold nuclei. This paper highlights some of the recent heavy-ion related results from CMS.

  5. The US-CMS Tier-1 Center Network Evolving toward 100Gbps

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P

    2011-01-01

    Fermilab hosts the US Tier-1 Center for the LHC's Compact Muon Solenoid (CMS) experiment. The Tier-1s are the central points for the processing and movement of LHC data. They sink raw data from the Tier-0 at CERN, process and store it locally, and then distribute the processed data to Tier-2s for simulation studies and analysis. The Fermilab Tier-1 Center is the largest of the CMS Tier-1s, accounting for roughly 35% of the experiment's Tier-1 computing and storage capacity. Providing capacious, resilient network services, both in terms of local network infrastructure and off-site data movement capabilities, presents significant challenges. This article will describe the current architecture, status, and near-term plans for network support of the US-CMS Tier-1 facility.

  6. Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production

    CERN Document Server

    Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S

    2004-01-01

    High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...

  7. Prototype for a generic thin-client remote analysis environment for CMS

    International Nuclear Information System (INIS)

    Steenberg, C.D.; Bunn, J.J.; Hickey, T.M.; Holtman, K.; Legrand, I.; Litvin, V.; Newman, H.B.; Samar, A.; Singh, S.; Wilkinson, R.

    2001-01-01

    The multi-tiered architecture of the highly distributed CMS computing system necessitates a flexible data distribution and analysis environment. The authors describe a prototype analysis environment which functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier-2 prototype to analyze CMS data stored at various locations from a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a 'black box' on the proto-Tier-2 system. ORCA Objectivity databases (e.g. an existing large CMS muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes and get the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF-ORCA and, importantly, it makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT
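    An illustrative sketch, in Python rather than the C++ of the prototype, of the thin-client pattern described above: the server exposes a query method over XML-RPC, and the client sends a query string and receives histogram contents. The method and parameter names are placeholders, not the CARF/ORCA interface.

    ```python
    # Language-neutral XML-RPC transport between a thin client and an analysis server.
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy
    import threading

    def run_histogram(query: str):
        """Server side: pretend to process the query and return histogram bins."""
        return {"query": query, "bin_edges": [0, 10, 20, 30], "contents": [5, 12, 7]}

    def start_server(port: int = 8765) -> SimpleXMLRPCServer:
        server = SimpleXMLRPCServer(("localhost", port), allow_none=True,
                                    logRequests=False)
        server.register_function(run_histogram, "run_histogram")
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server

    if __name__ == "__main__":
        start_server()
        # Thin client: only a URL and a query string are needed on the user side.
        proxy = ServerProxy("http://localhost:8765", allow_none=True)
        hist = proxy.run_histogram("muon_pt > 20")
        print(hist["bin_edges"], hist["contents"])
    ```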

  8. Use of the gLite-WMS in CMS for production and analysis

    International Nuclear Information System (INIS)

    Codispoti, G; Grandi, C; Fanfani, A; Bonacorsi, D; Spiga, D; Sciabà, A; Lemaitre, S; Litmaath, M; Calas, Y; Cinquilli, M; Farina, F; Miccio, V; Sartirana, A; Dongiovanni, D; Cesini, D; Fanzago, F; Lacaprara, S; Belforte, S; Wakefield, S; Hernandez, J

    2010-01-01

    The CMS experiment at the LHC started using the Resource Broker (from the EDG and LCG projects) to submit Monte Carlo production and analysis jobs to distributed computing resources of the WLCG infrastructure over six years ago. Since 2006 the gLite Workload Management System (WMS) and Logging and Bookkeeping (LB) service have been used. The interaction with the gLite-WMS/LB happens through the CMS production and analysis frameworks, respectively ProdAgent and CRAB, via a common component, BOSSLite. The important improvements recently made in the gLite-WMS/LB as well as in the CMS tools, together with the intrinsic independence of different WMS/LB instances, allow CMS to reach the stability and scalability needed for LHC operations. In particular, the use of a multi-threaded approach in BOSSLite allowed a significant increase in the scalability of the system. In this work we present the operational set-up of CMS production and analysis based on the gLite-WMS, and the performance obtained in past data challenges and in the daily Monte Carlo production and user analysis activity of the experiment.
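    A minimal sketch of the multi-threaded submission idea mentioned above: many jobs are handed to a pool of worker threads, each performing a slow, I/O-bound submission call, instead of submitting them one by one. The submit function is a placeholder for the real interaction with a workload management system.

    ```python
    # Parallel submission of many jobs with a thread pool.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    import random
    import time

    def submit_one(job_id: int) -> str:
        """Stand-in for a single submission to the workload management system."""
        time.sleep(random.uniform(0.1, 0.3))     # simulate network latency
        return f"job-{job_id:04d} submitted"

    def submit_all(n_jobs: int, n_threads: int = 8):
        results = []
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            futures = [pool.submit(submit_one, i) for i in range(n_jobs)]
            for future in as_completed(futures):
                results.append(future.result())
        return results

    if __name__ == "__main__":
        start = time.time()
        out = submit_all(40)
        print(f"{len(out)} submissions in {time.time() - start:.1f}s")
    ```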

  9. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS Experiment is taking high-energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still-growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and removal of CMS software is managed centrally. Since the deployment team has no local accounts at the remote sites, all installations have to be performed via Grid jobs. Via a VOMS role the job gets a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access, the installation jobs must be very robust against possible failures, in order not to leave a broken software installation behind. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on recent deployment experience and the achieved performance.
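    An illustrative sketch of a robust, non-interactive installation job of the kind described above: it checks write access to the software area, runs an installation command, verifies the result, and cleans up on failure so that no broken installation is left behind. All paths and commands are hypothetical placeholders, not the actual CMS deployment tooling.

    ```python
    # Skeleton of an installation Grid job that never leaves a broken install behind.
    import os
    import shutil
    import subprocess
    import sys

    SW_AREA = "/nfs/sw_area"                 # hypothetical shared software area
    RELEASE = "CMSSW_X_Y_Z"                  # hypothetical release name

    def install_release() -> int:
        target = os.path.join(SW_AREA, RELEASE)
        if not os.access(SW_AREA, os.W_OK):
            print("no write privilege on software area, aborting")
            return 1
        try:
            # Placeholder for the real package-manager call (e.g. apt/rpm based).
            subprocess.run(["/bin/sh", "-c", f"mkdir -p {target}"], check=True)
            # Verification step: the release directory must exist (a real job
            # would run a release-specific validation script here).
            if not os.path.isdir(target):
                raise RuntimeError("release directory missing after install")
            return 0
        except Exception as err:
            print(f"installation failed ({err}); removing partial installation")
            shutil.rmtree(target, ignore_errors=True)
            return 2

    if __name__ == "__main__":
        sys.exit(install_release())
    ```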

  10. Integration of End-User Cloud Storage for CMS Analysis

    CERN Document Server

    Riahi, Hassen; Álvarez Ayllón, Alejandro; Balcas, Justas; Ciangottini, Diego; Hernández, José M; Keeble, Oliver; Magini, Nicolò; Manzi, Andrea; Mascetti, Luca; Mascheroni, Marco; Tanasijczuk, Andres Jorge; Vaandering, Eric Wayne

    2018-01-01

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world’s largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with...

  11. Running CMS remote analysis builder jobs on advanced resource connector middleware

    International Nuclear Information System (INIS)

    Edelmann, E; Happonen, K; Koivumäki, J; Lindén, T; Välimaa, J

    2011-01-01

    CMS user analysis jobs are distributed over the grid with the CMS Remote Analysis Builder application (CRAB). According to the CMS computing model the applications should run transparently on the different grid flavours in use. In CRAB this is handled with different plugins that are able to submit to different grids. Recently a CRAB plugin for submitting to the Advanced Resource Connector (ARC) middleware has been developed. The CRAB ARC plugin enables simple and fast job submission with full job status information available. CRAB can be used with a server which manages and monitors the grid jobs on behalf of the user. In the presentation we will report on the CRAB ARC plugin and on the status of integrating it with the CRAB server and compare this with using the gLite ARC interoperability method for job submission.
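    A minimal sketch of the plugin pattern described above: a common submission interface with one concrete plugin per grid flavour, selected by name at run time. Class and method names are illustrative, not the CRAB code.

    ```python
    # Grid-flavour plugins behind a common scheduler interface.
    from abc import ABC, abstractmethod

    class SchedulerPlugin(ABC):
        """Common interface every grid-flavour plugin must implement."""

        @abstractmethod
        def submit(self, job_description: dict) -> str: ...

        @abstractmethod
        def status(self, job_id: str) -> str: ...

    class ArcScheduler(SchedulerPlugin):
        def submit(self, job_description: dict) -> str:
            # A real plugin would call the ARC client tools here.
            return f"arc-{job_description['name']}"

        def status(self, job_id: str) -> str:
            return "RUNNING"

    class GliteScheduler(SchedulerPlugin):
        def submit(self, job_description: dict) -> str:
            # A real plugin would talk to the gLite WMS here.
            return f"glite-{job_description['name']}"

        def status(self, job_id: str) -> str:
            return "SCHEDULED"

    PLUGINS = {"arc": ArcScheduler, "glite": GliteScheduler}

    def get_scheduler(flavour: str) -> SchedulerPlugin:
        return PLUGINS[flavour]()        # transparent choice of grid flavour

    if __name__ == "__main__":
        sched = get_scheduler("arc")
        job_id = sched.submit({"name": "analysis-0001"})
        print(job_id, sched.status(job_id))
    ```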

  12. The CMS tracker control system

    Science.gov (United States)

    Dierlamm, A.; Dirkes, G. H.; Fahrer, M.; Frey, M.; Hartmann, F.; Masetti, L.; Militaru, O.; Shah, S. Y.; Stringer, R.; Tsirou, A.

    2008-07-01

    The Tracker Control System (TCS) is a distributed control software system that operates about 2000 power supplies for the silicon modules of the CMS Tracker and monitors its environmental sensors. TCS must thus be able to handle about 10^4 power supply parameters, about 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10^5 parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention.
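    A minimal sketch of the hierarchical control idea described above: commands propagate down the partition tree to the leaf devices, while each node reports upwards a state summarising its children. Node names and states are illustrative; the real system is built on PVSS/JCOP, not Python.

    ```python
    # Commands flow down the tree; aggregated states flow back up to the root.
    class ControlNode:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.state = "OFF"

        def command(self, action: str) -> None:
            """Commands move down to the individual hardware devices."""
            if self.children:
                for child in self.children:
                    child.command(action)
            else:
                self.state = "ON" if action == "SWITCH_ON" else "OFF"

        def summary(self) -> str:
            """States are reported up: a node summarises its children."""
            if not self.children:
                return self.state
            states = {c.summary() for c in self.children}
            if states == {"ON"}:
                return "ON"
            if states == {"OFF"}:
                return "OFF"
            return "PARTIAL"

    if __name__ == "__main__":
        tracker = ControlNode("Tracker", [
            ControlNode("TIB", [ControlNode("PS-1"), ControlNode("PS-2")]),
            ControlNode("TOB", [ControlNode("PS-3")]),
        ])
        tracker.children[0].command("SWITCH_ON")     # switch on only the TIB branch
        print(tracker.summary())                     # -> PARTIAL
    ```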

  13. Detector Alignment Studies for the CMS Experiment

    CERN Document Server

    Lampén, Tapio

    2007-01-01

    This thesis presents studies related to track-based alignment for the future CMS experiment at CERN. Excellent geometric alignment is crucial to fully benefit from the outstanding resolution of individual sensors. The large number of sensors makes it difficult in CMS to utilize computationally demanding alignment algorithms. A computationally light alignment algorithm, called the Hits and Impact Points algorithm (HIP), is developed and studied. It is based on minimization of the hit residuals. It can be applied to individual sensors or to composite objects. All six alignment parameters (three translations and three rotations), or a subgroup of them, can be considered. The algorithm is expected to be particularly suitable for the alignment of the innermost part of CMS, the pixel detector, during its early operation, but can be easily utilized to align other parts of CMS also. The HIP algorithm is applied to simulated CMS data and real data measured with a test-beam setup. The simulation studies dem...
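    An illustrative sketch of the residual-minimisation idea behind a track-based alignment algorithm: for one sensor, find the translation that minimises the squared differences between measured hit positions and the track impact points predicted from the current geometry. Only a 2D translation is fitted here; the full problem also includes rotations and many sensors, and the numbers below are synthetic.

    ```python
    # Least-squares estimate of a sensor misalignment from hit residuals.
    import numpy as np

    def fit_translation(hits: np.ndarray, predictions: np.ndarray) -> np.ndarray:
        """Best-fit translation: the mean of the (hit - prediction) residuals."""
        residuals = hits - predictions        # shape (n_hits, 2)
        return residuals.mean(axis=0)         # analytic chi^2 minimum for a pure shift

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_shift = np.array([0.12, -0.05])               # mm, unknown misalignment
        predictions = rng.uniform(-10, 10, size=(500, 2))  # track impact points
        hits = predictions + true_shift + rng.normal(0, 0.03, size=(500, 2))
        print("estimated shift:", fit_translation(hits, predictions))
    ```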

  14. The Need for an R&D and Upgrade Program for CMS Software and Computing

    CERN Document Server

    Elmer, Peter; Stenson, Kevin; Wittich, Peter

    2013-01-01

    Over the next ten years, the physics reach of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will be greatly extended through increases in the instantaneous luminosity of the accelerator and large increases in the amount of collected data. Due to changes in the way Moore's Law computing performance gains have been realized in the past decade, an aggressive program of R&D is needed to ensure that the computing capability of CMS will be up to the task of collecting and analyzing this data.

  15. SiteDB: Marshalling people and resources available to CMS

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S [H.H. Wills Physics Laboratory, Bristol (United Kingdom); Bonacorsi, D [University of Bologna and INFN Bologna (Italy); Ferreira, M Dias [SPRACE (Brazil); Egeland, R [University of Minnesota, Twin Cities (United States)

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of the resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports that other CMS tools use to access the information it contains, for instance enabling CRAB to use user-friendly names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems used daily by CMS Computing operations.
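    An illustrative sketch of how another tool might consume a site-database API of the kind described above: query a REST endpoint for the people holding a given role at a site. The URL, endpoint and JSON layout are hypothetical placeholders, not the real SiteDB interface.

    ```python
    # Query a hypothetical site-database REST API for site contacts.
    import requests

    BASE_URL = "https://cmsweb.example.org/sitedb"     # hypothetical service URL

    def site_contacts(site_name: str, role: str) -> list:
        """Return the names of people with `role` at `site_name`."""
        r = requests.get(f"{BASE_URL}/site-responsibilities",
                         params={"site": site_name, "role": role}, timeout=30)
        r.raise_for_status()
        return [entry["person"] for entry in r.json()["result"]]

    if __name__ == "__main__":
        for name in site_contacts("T2_XX_Example", "Site Admin"):
            print(name)
    ```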

  16. SiteDB: Marshalling people and resources available to CMS

    International Nuclear Information System (INIS)

    Metson, S; Bonacorsi, D; Ferreira, M Dias; Egeland, R

    2010-01-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in co-ordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of the resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports that other CMS tools use to access the information it contains, for instance enabling CRAB to use user-friendly names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems used daily by CMS Computing operations.

  17. Use of glide-ins in CMS for production and analysis

    CERN Document Server

    Bradley, D; Hahn, K; Holzman, B; Padhi, S; Pi, H; Spiga, D; Sfiligoi, I; Vaandering, E; Würthwein, F

    2010-01-01

    With the evolution of various grid federations, Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons are submitted to the worker nodes via Condor-G. Once activated, they preserve the Master-Worker relationship, with the worker first validating the execution environment on the worker node before pulling jobs sequentially until the expiry of their lifetimes. The combination of late binding and validation significantly reduces the overall failure rate visible to CMS physicists. We discuss the extensive use of glideinWMS since the computing challenge CCRC-08, in order to prepare for the forthcoming LHC data-taking period. The key features essential to the success of large-scale production and analysis on CMS resources across major grid federations,...

  18. The Architecture of the CMS Level-1 Trigger Control and Monitoring System

    CERN Document Server

    Magrans de Abril, Marc; Hammer, Josef; Hartl, Christian; Xie, Zhen

    2011-01-01

    The architecture of the Level-1 Trigger Control and Monitoring system for the CMS experiment is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking at the LHC. It is a medium-size distributed system of about 200 processes running on over 40 PCs, controlling about 4000 electronic boards. It has been designed to handle the trigger configuration and monitoring during data taking, as well as all communications with the main run control of CMS. Furthermore, its design foresees the provision of the software infrastructure for detailed testing of the trigger system during beam downtime.

  19. Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

    CERN Document Server

    Riahi, Hassen

    2010-01-01

    While the majority of CMS data analysis activities rely on the distributed computing infrastructure of the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast-turnaround tasks, the Workload Management group has designed a CRABServer-based system to meet two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool[7]) and to allow an easy transition to the Tier-0 system. While the CRABServer component had initially been designed for Grid analysis by CMS end-users, with a few modifications it also turned out to be a very powerful service to manage and monitor local submissions on the CAF. Tran...

  20. Implementing data placement strategies for the CMS experiment based on a popularity mode

    CERN Multimedia

    CERN. Geneva; Barreiro Megino, Fernando Harald

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of CMS data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and to suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demons...

  1. Implementing data placement strategies for the CMS experiment based on a popularity model

    CERN Document Server

    Giordano, Domenico

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the WorldWide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of CMS data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and to suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonst...
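    A minimal sketch of the site-cleaning logic described above: when a site is over its space quota, rank replicas by how recently and how often they were accessed and suggest the least popular ones for deletion until the site is back under quota. The numbers and field names are illustrative, not the CMS agent.

    ```python
    # Popularity-based suggestion of replicas to delete at a site over quota.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Replica:
        dataset: str
        size_tb: float
        accesses_last_90d: int
        days_since_last_access: int

    def cleaning_candidates(replicas: List[Replica], used_tb: float,
                            quota_tb: float) -> List[Replica]:
        """Suggest unpopular replicas to delete until usage drops below quota."""
        to_free = used_tb - quota_tb
        if to_free <= 0:
            return []
        # Least popular first: few recent accesses, long time since last access.
        ranked = sorted(replicas, key=lambda r: (r.accesses_last_90d,
                                                 -r.days_since_last_access))
        suggestions, freed = [], 0.0
        for rep in ranked:
            if freed >= to_free:
                break
            suggestions.append(rep)
            freed += rep.size_tb
        return suggestions

    if __name__ == "__main__":
        reps = [Replica("/A/Run2011A/AOD", 30.0, 0, 200),
                Replica("/B/Run2011B/AOD", 50.0, 120, 2),
                Replica("/C/Summer11/AODSIM", 20.0, 3, 60)]
        for rep in cleaning_candidates(reps, used_tb=95.0, quota_tb=60.0):
            print("suggest deleting", rep.dataset, rep.size_tb, "TB")
    ```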

  2. Improving collaborative documentation in CMS

    International Nuclear Information System (INIS)

    Lassila-Perini, Kati; Salmi, Leena

    2010-01-01

    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the documentation. To improve the quality, we have started a multidisciplinary project involving CMS user support and expertise in technical communication from the University of Turku, Finland. In this paper, we present possible approaches to study the usability of the documentation, for instance, usability tests conducted recently for the CMS software and computing user documentation.

  3. The CMS tracker control system

    International Nuclear Information System (INIS)

    Dierlamm, A; Dirkes, G H; Fahrer, M; Frey, M; Hartmann, F; Masetti, L; Militaru, O; Shah, S Y; Stringer, R; Tsirou, A

    2008-01-01

    The Tracker Control System (TCS) is a distributed control software system that operates about 2000 power supplies for the silicon modules of the CMS Tracker and monitors its environmental sensors. TCS must thus be able to handle about 10^4 power supply parameters, about 10^3 environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10^5 parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention.

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with routine operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  5. Exercising CMS dataflows and workflows in computing challenges at the SpanishTier-1 and Tier-2 sites

    International Nuclear Information System (INIS)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F; Flix, J; Merino, G

    2008-01-01

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented

  6. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted.   CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat a...

  7. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natu...

  8. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natur...

  9. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of empl...

  10. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and ...

  11. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the na...

  12. Exercising CMS dataflows and workflows in computing challenges at the SpanishTier-1 and Tier-2 sites

    Energy Technology Data Exchange (ETDEWEB)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J [CIEMAT, Madrid (Spain); Cabrillo, I; Caballero, I G; Marco, R; Matorras, F [IFCA, Santander (Spain); Flix, J; Merino, G [PIC, Barcelona (Spain)], E-mail: jose.hernandez@ciemat.es

    2008-07-15

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  13. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the ICMS Web site. The following items can be found on: http://cms.cern.ch/iCMS Management – CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management – CB – MB – FB Agendas and minutes are accessible to CMS members through Indico. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2008 Annual Reviews are posted in Indico. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral student upon completion of their theses.  Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and name of their first employer. The Notes, Conference Reports and Theses published si...

  14. CMS software deployment on OSG

    International Nuclear Information System (INIS)

    Kim, B; Avery, P; Thomas, M; Wuerthwein, F

    2008-01-01

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, mainly targeted at deployment on the OSG, feature instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable to changes in the Grid computing environment and in the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment on the OSG Grid computing environment.

  15. CMS software deployment on OSG

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B; Avery, P [University of Florida, Gainesville, FL 32611 (United States); Thomas, M [California Institute of Technology, Pasadena, CA 91125 (United States); Wuerthwein, F [University of California at San Diego, La Jolla, CA 92093 (United States)], E-mail: bockjoo@phys.ufl.edu, E-mail: thomas@hep.caltech.edu, E-mail: avery@phys.ufl.edu, E-mail: fkw@fnal.gov

    2008-07-15

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, mainly targeted at deployment on the OSG, feature instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid Security Infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable to changes in the Grid computing environment and in the software releases. We present the design of the tools, statistics gathered during their operation, and our experience with CMS software deployment on the OSG Grid computing environment.

  16. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  17. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab; Bockelman, B. [Nebraska U.; Dykstra, D. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Girone, M. [CERN; Gutsche, O. [Fermilab; Holzman, B. [Fermilab; Hufnagel, D. [Fermilab; Kim, H. [Fermilab; Kennedy, R. [Fermilab; Mason, D. [Fermilab; Spentzouris, P. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab; Vaandering, E. [Fermilab

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and compare the cost and operational efficiency with our dedicated resources. Finally, we will consider the changes in the working model of HEP computing implied by the availability of large-scale resources scheduled at peak times.

  18. Implementation of NASTRAN on the IBM/370 CMS operating system

    Science.gov (United States)

    Britten, S. S.; Schumacker, B.

    1980-01-01

    The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.

  19. NSC KIPT Linux cluster for computing within the CMS physics program

    International Nuclear Information System (INIS)

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. Capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been instrumental in site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  1. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  2. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, which then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  3. CMS Data Analysis: Current Status and Future Strategy

    CERN Document Server

    Innocente, V

    2003-01-01

    We present the current status of CMS data analysis architecture and describe work on future Grid-based distributed analysis prototypes. CMS has two main software frameworks related to data analysis: COBRA, the main framework, and IGUANA, the interactive visualisation framework. Software using these frameworks is used today in the world-wide production and analysis of CMS data. We describe their overall design and present examples of their current use with emphasis on interactive analysis. CMS is currently developing remote analysis prototypes, including one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by CMS physicists will guide us in forming a Grid-enriched analysis strategy. The status of this work is presented, as is an outline of how we plan to leverage the power of our existing frameworks in the migration of CMS software to the Grid.

  4. CMS Create #2 | 3-4 October | Register now!

    CERN Multimedia

    2016-01-01

    CMS Create brings together CERN members and students from IPAC Design Genève. The goal is to build a prototype exhibit illustrating what CMS does and how it does it. The exhibit will introduce the world of a particle physics detector to the general public, and to younger visitors in particular.    CMS Create, hosted by IdeaSquare, was first held in November 2015. There were 4 highly diverse teams made up of participants from many educational backgrounds and from 15 nationalities. 36% of these were women; a figure we hope will grow this year. The 25 participants were CMS physicists, computer scientists, engineers, other CMS collaborators and IPAC students. The 2015 winning exhibit is now permanently installed in the visitor reception centre at CMS Point 5, which was visited by 20,600 visitors during 2015. Are you creative and motivated to share your ideas?  Take part in CMS Create #2, meet with scientists and designers from all over the world and explain to CER...

  5. Storage element performance optimization for CMS analysis jobs

    International Nuclear Information System (INIS)

    Behrmann, G; Dahlblom, J; Guldmyr, J; Happonen, K; Lindén, T

    2012-01-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Achieving a good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system is able to scale with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2, used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which every four hours sends 100 analysis jobs to each site. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months in time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I
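    The CPU efficiency quoted above is simply the ratio of CPU time actually consumed by a job to its wall-clock time, aggregated over the jobs reported in the monitoring; tracking this number is what lets a site see the effect of SE tuning. The minimal sketch below illustrates the calculation; the job-record fields are hypothetical and only stand in for whatever the real Dashboard exports.

```python
# Minimal sketch: aggregate CPU efficiency (CPU time / wall time) over a set of
# job records. The field names ("cpu_time", "wall_time") are illustrative and
# do not reflect the actual CMS Dashboard schema.

def cpu_efficiency(jobs):
    """Return the aggregate CPU efficiency of a list of job records."""
    total_cpu = sum(j["cpu_time"] for j in jobs)
    total_wall = sum(j["wall_time"] for j in jobs)
    return total_cpu / total_wall if total_wall > 0 else 0.0

if __name__ == "__main__":
    # Three hypothetical analysis jobs (times in seconds).
    jobs = [
        {"cpu_time": 3400, "wall_time": 3600},
        {"cpu_time": 2900, "wall_time": 3600},
        {"cpu_time": 3550, "wall_time": 3700},
    ]
    print(f"CPU efficiency: {cpu_efficiency(jobs):.1%}")
```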

  6. Using the CMS high level trigger as a cloud resource

    International Nuclear Information System (INIS)

    Colling, David; Huffman, Adam; Bauer, Daniela; McCrae, Alison; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Ozga, Wojciech; Chaze, Olivier; Lahiff, Andrew; Grandi, Claudio; Tiradani, Anthony; Sgaravatto, Massimo

    2014-01-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  7. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    The physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies according to event complexity. Measurements were made in order to determine this correlation quantitatively, providing a means to predict it based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in the processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of the processing time to split the workflow into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
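    As a rough illustration of the idea described above, the sketch below splits a list of luminosity blocks into jobs using a per-event processing-time estimate derived from the data-taking conditions, instead of a fixed number of events per job. The prediction function and its inputs (e.g. pile-up) are hypothetical placeholders for the measured correlation, not the CMS model itself.

```python
# Sketch: split work into jobs of roughly equal predicted CPU time rather than
# equal numbers of events. The linear model in predict_seconds_per_event() is a
# stand-in for the measured dependence on data-taking conditions (e.g. pile-up).

def predict_seconds_per_event(pileup, base=2.0, slope=0.15):
    """Hypothetical model: reconstruction time grows with pile-up."""
    return base + slope * pileup

def split_into_jobs(lumi_blocks, target_job_seconds=8 * 3600):
    """Group (n_events, pileup) lumi blocks into jobs of ~target_job_seconds."""
    jobs, current, current_time = [], [], 0.0
    for n_events, pileup in lumi_blocks:
        block_time = n_events * predict_seconds_per_event(pileup)
        if current and current_time + block_time > target_job_seconds:
            jobs.append(current)
            current, current_time = [], 0.0
        current.append((n_events, pileup))
        current_time += block_time
    if current:
        jobs.append(current)
    return jobs

# Example: blocks taken at different pile-up end up in differently sized jobs.
blocks = [(5000, 10), (5000, 30), (5000, 30), (5000, 50)]
print(len(split_into_jobs(blocks, target_job_seconds=6 * 3600)), "jobs")
```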

  8. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  9. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This service was originally developed to address the inefficient use of CMS computing resources caused by transferring analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of user file handling, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and repor...
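    Since the actual AsyncStageOut document schema is not given here, the sketch below only illustrates the general pattern the abstract describes: a job records a pending transfer as a JSON document in CouchDB (via its standard HTTP REST API) and exits, while a separate agent later picks such documents up to perform and monitor the transfer. The database name, document fields and URL are hypothetical.

```python
# Sketch of the asynchronous stage-out pattern: the job only writes a small
# "transfer request" document into CouchDB; a dedicated service processes the
# queue later. Uses CouchDB's plain HTTP API via the `requests` library.
import uuid
import requests

COUCH_URL = "http://localhost:5984"   # hypothetical CouchDB instance
DB = "asynctransfer_demo"             # hypothetical database name

def queue_transfer(user, source_lfn, source_site, dest_site):
    """Insert a pending-transfer document; returns the new document id."""
    doc_id = uuid.uuid4().hex
    doc = {
        "user": user,
        "lfn": source_lfn,
        "source": source_site,
        "destination": dest_site,
        "state": "new",   # a transfer agent would update this field later
    }
    resp = requests.put(f"{COUCH_URL}/{DB}/{doc_id}", json=doc, timeout=10)
    resp.raise_for_status()
    return doc_id

# Example usage (assumes the database already exists):
# queue_transfer("jdoe", "/store/user/jdoe/output_1.root", "T2_CH_CERN", "T2_ES_CIEMAT")
```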

  10. 42 CFR 405.874 - Appeals of CMS or a CMS contractor.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Appeals of CMS or a CMS contractor. 405.874 Section... Part B Program § 405.874 Appeals of CMS or a CMS contractor. A CMS contractor's (that is, a carrier... supplier enrollment application. If CMS or a CMS contractor denies a provider's or supplier's enrollment...

  11. Charge distribution dependency on gap thickness of CMS endcap RPC

    CERN Document Server

    Park, Sung Keun

    2016-01-01

    We present a systematic study of the charge distribution dependency of the CMS Resistive Plate Chamber (RPC) on gap thickness. Prototypes of double-gap chambers with five different gap thicknesses from 1.8 mm to 1.0 mm in 0.2 mm steps have been built with 2 mm thick phenolic high-pressure-laminated (HPL) plates. The charges of cosmic-muon signals induced on the detector strips are measured as a function of time using two four-channel 400-MHz flash ADCs. In addition, the arrival time of the muons and the strip cluster sizes are measured by digitizing the signal using a 32-channel voltage-mode front-end electronics and a 400-MHz 64-channel multi-hit TDC. The gain and the input impedance of the front-end electronics were 200 mV/mV and 20 Ohm, respectively.

  12. Implementing data placement strategies for the CMS experiment based on a popularity model

    International Nuclear Information System (INIS)

    Barreiro Megino, F H; Cinquilli, M; Giordano, D; Karavakis, E; Girone, M; Magini, N; Mancinelli, V; Spiga, D

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of CMS data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonstrate dynamic data placement functionality based on this popularity service and to integrate it in the data and workload management systems: as a consequence the pre-placement of data will be minimized and additional replication of hot datasets will be requested automatically. This paper will give an insight into the development, validation and production process and will analyze how the framework has influenced resource optimization and daily operations in CMS.
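    A minimal sketch of the site-cleaning logic described above is given below, under the assumption that the popularity service can provide a recent access count per dataset replica: when a site is over its space quota, the least-accessed (and unprotected) replicas are suggested for deletion until enough space would be freed. All names and thresholds are illustrative, not the actual CMS Popularity Service interface.

```python
# Sketch: popularity-based cleaning suggestion for a Tier-2 approaching its quota.
# Each replica is (dataset_name, size_tb, accesses_last_3_months); the access
# counts would come from a popularity service in a real implementation.

def suggest_deletions(replicas, used_tb, quota_tb, target_fraction=0.9,
                      protected=frozenset()):
    """Return datasets to delete so usage drops below target_fraction * quota."""
    to_free = used_tb - target_fraction * quota_tb
    if to_free <= 0:
        return []
    suggestions, freed = [], 0.0
    # Least-popular replicas first (fewest recent accesses, largest size as tie-break).
    for name, size_tb, accesses in sorted(replicas, key=lambda r: (r[2], -r[1])):
        if name in protected or freed >= to_free:
            continue
        suggestions.append(name)
        freed += size_tb
    return suggestions

replicas = [("/DataA/RECO", 40.0, 0), ("/DataB/AOD", 25.0, 120), ("/DataC/AOD", 30.0, 2)]
print(suggest_deletions(replicas, used_tb=95.0, quota_tb=100.0))
```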

  13. CMS Collaboration

    International Nuclear Information System (INIS)

    Faridah Mohammad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim

    2013-01-01

    Full-text: The CMS Collaboration is an international scientific collaboration based at the European Organization for Nuclear Research (CERN), Switzerland, dedicated to carrying out research in experimental particle physics. Consisting of 179 institutions from 41 countries all around the world, the CMS Collaboration operates a general-purpose detector, the Compact Muon Solenoid (CMS), with which its members conduct experiments on the collisions of two proton beams accelerated to 8 TeV in the LHC ring. In this paper, we describe how the CMS detector is used by the scientists of the CMS Collaboration to reconstruct the most basic building blocks of matter. (author)

  14. CMS releases new batch of LHC open data

    CERN Document Server

    Achintya Rao

    2016-01-01

    CMS makes 300 TB of high-quality data from the LHC available to the public through the CERN Open Data Portal.   A CMS collision event as seen in the built-in event display on the CERN Open Data Portal (Image: CERN) The CMS collaboration has made 300 TB of high-quality data from the LHC available to the public through the CERN Open Data Portal. The collision data come in two types: The so-called “primary datasets” are in the same format used by the CMS Collaboration to perform research. The “derived datasets” on the other hand require a lot less computing power and can be readily analysed by university or even high-school students. Notably, CMS is also providing the simulated data generated with the same software version that should be used to analyse the primary datasets. Simulations play a crucial role in particle-physics research and CMS is also making available the protocols for generating the simulations that are provided. The data release is accompanie...

  15. CMS Detector Posters

    CERN Multimedia

    2016-01-01

    CMS Detector posters (produced in 2000): CMS installation; CMS collaboration; From the Big Bang to Stars; LHC Magnetic Field; Magnet System; Tracking System; Tracker Electronics; Calorimetry; Electromagnetic Calorimeter; Hadronic Calorimeter; Muon System; Muon Detectors; Trigger and data acquisition (DAQ). ECAL posters (produced in 2010, FR & EN): CMS ECAL; CMS ECAL-Supermodule cooling and mechatronics; CMS ECAL-Supermodule assembly.

  16. Top quark mass measurements with CMS

    CERN Document Server

    Kovalchuk, Nataliia

    2017-01-01

    Measurements of the top quark mass are presented, obtained from CMS data collected in proton-proton collisions at the LHC at centre-of-mass energies of 7 TeV and 8 TeV. The mass of the top quark is measured using several methods and channels, including the reconstructed invariant mass distribution of the top quark, an analysis of endpoint spectra, as well as measurements from the shapes of top quark decay distributions. The dependence of the mass measurement on the kinematic phase space is investigated. The results of the various channels are combined and compared to the world average. The top quark mass and $\alpha_S$ are also extracted from the top pair cross section measured at CMS.

  17. Angular distributions of the quenched energy flow from dijets with different radius parameters in CMS

    Energy Technology Data Exchange (ETDEWEB)

    McGinn, Christopher F.

    2016-12-15

    The flow of the quenched energy in imbalanced dijet events has previously been studied with the CMS detector via the transverse vector sum of charged particles, namely the missing p_T measurement. The results have led to new theoretical insights aiming to explain the wide-angle radiation. The missing p_T technique has been improved so that it allows the study of the angular distribution of the energy flow with respect to the dijet axis. The measurements are performed using different distance parameters R with the anti-k_T clustering algorithm, which provides information about how the angular distribution of the quenched energy depends on the jet width.

  18. CMS Factsheet

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2016-01-01

    CMS Factsheets: containing facts about the CMS collaboration and detector. Printed copies of the English version are available from the CMS Secretariat. Responsible for translations: English only - E.Gibney (updated 2015)

  19. Physics with CMS and Electronic Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Rohlf, James W. [Boston Univ., MA (United States)

    2016-08-01

    The current funding is for continued work on the Compact Muon Solenoid (CMS) at the CERN Large Hadron Collider (LHC) as part of the Energy Frontier experimental program. The current budget year covers the first year of physics running at 13 TeV (Run 2). During this period we have concentrated on commissioning of the μTCA electronics, a new standard for the distribution of CMS trigger and timing control signals and for high-bandwidth data acquisition, as well as participating in Run 2 physics.

  20. CMS-Wave

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program CMS-Wave. CMS-Wave is a two-dimensional spectral wind-wave generation and transformation model that employs a forward-marching, finite-difference method to solve the wave action conservation equation. Capabilities of CMS-Wave include wave shoaling, refraction... CMS-Wave can be used in either a half- or full-plane mode, with primary waves propagating from the seaward boundary toward shore. It can

  1. The CMS Muon System Alignment

    CERN Document Server

    Martinez Ruiz-Del-Arbol, P

    2009-01-01

    The alignment of the muon system of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit and impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, as long as the available integrated luminosity still significantly limits the size of the muon sample from collisions, cosmic-muon and beam-halo signatures play a very strong role. During the last commissioning runs in 2008 the first aligned geometries were produced and validated with data. The CMS offline computing infrastructure has been used in order to perform improved reconstructions. We present the computational aspects related to the calculation of alignment constants at the CERN Analysis Facility (CAF), the production and population of databases, and the validation and performance in the official reconstruction. Also...

  2. Intelligent distributed computing

    CERN Document Server

    Thampi, Sabu

    2015-01-01

    This book contains a selection of refereed and revised papers of the Intelligent Distributed Computing Track originally presented at the third International Symposium on Intelligent Informatics (ISI-2014), September 24-27, 2014, Delhi, India.  The papers selected for this Track cover several Distributed Computing and related topics including Peer-to-Peer Networks, Cloud Computing, Mobile Clouds, Wireless Sensor Networks, and their applications.

  3. CMS Analysis and Data Reduction with Apache Spark

    Energy Technology Data Exchange (ETDEWEB)

    Gutsche, Oliver [Fermilab; Canali, Luca [CERN; Cremer, Illia [Magnetic Corp., Waltham; Cremonesi, Matteo [Fermilab; Elmer, Peter [Princeton U.; Fisk, Ian [Flatiron Inst., New York; Girone, Maria [CERN; Jayatilaka, Bo [Fermilab; Kowalkowski, Jim [Fermilab; Khristenko, Viktor [CERN; Motesnitsalis, Evangelos [CERN; Pivarski, Jim [Princeton U.; Sehrish, Saba [Fermilab; Surdy, Kacper [CERN; Svyatkovskiy, Alexey [Princeton U.

    2017-10-31

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping to reduce the cost of ownership for the end users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. For the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. For the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
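    To make the "data reduction" thrust concrete, the sketch below shows the kind of Spark job the abstract is about: reading a large columnar dataset, applying an event selection and keeping only the columns needed for the final analysis ntuple. The input path, column names and cuts are hypothetical; only the general PySpark pattern (read, filter, select, write) reflects the approach described.

```python
# Sketch of a Spark-based reduction step: large centrally produced dataset in,
# small analysis ntuple out. Column names and cuts are purely illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cms-reduction-sketch").getOrCreate()

# Hypothetical input: flat columnar events converted from the experiment format.
events = spark.read.parquet("hdfs:///demo/cms/flat_events/")

reduced = (
    events
    .filter((F.col("met_pt") > 200.0) & (F.col("n_jets") >= 2))   # event selection
    .select("run", "lumi", "event", "met_pt", "met_phi", "jet_pt", "jet_eta")
)

# Write the reduced ntuple for interactive analysis.
reduced.write.mode("overwrite").parquet("hdfs:///demo/cms/reduced_ntuple/")
```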

  4. Upgrade of the CMS Event Builder

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We are presenting design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10 Gb/s Ethernet. We report on tests and performance measurements with small-scale test setups.

  5. The CMS High Level Trigger System: Experience and Future Development

    CERN Document Server

    Bauer, Gerry; Bowen, Matthew; Branson, James G; Bukowiec, Sebastian; Cittolin, Sergio; Coarasa, J A; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Flossdorf, Alexander; Gigi, Dominique; Glege, Frank; Gomez-Reino, R; Hartl, Christian; Hegeman, Jeroen; Holzner, André; Y L Hwong; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, R K; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Shpakov, Dennis; Simon, M; Spataru, A C; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of the order of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, are discussed.

  6. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd; Graham, Gregory E.; Aziz, Shafqat; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yujun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge; Kategari, Suchindra; Couvares, Peter; Smet, Alan De; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  7. CMS results in Electroweak Physics

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    We present the results of electroweak studies performed using data collected in 2010 at a center-of-mass energy of 7 TeV by the CMS experiment at the LHC. Besides their intrinsic interest as unique samples to calibrate and understand the CMS detector response to leptons, jets and missing energy, events containing W and Z bosons appear as dominant components in many Higgs searches and in most of the searches beyond the Standard Model, either as signal or as background. In addition, the excellent level of theoretical and experimental understanding of these processes allows electroweak tests at the LHC at an unprecedented level of precision. CMS uses a wide range of final states to measure cross sections, asymmetries, polarizations and differential distributions in general. The current integrated luminosity is already sufficient to perform not just inclusive measurements using W and Z decays into muons and electrons, but also precise studies of associated jet production and final states containing taus, as well...

  8. CMS General Poster 2009 : to raise awareness of CMS, the CMS detector, its parts and people

    CERN Multimedia

    CMS outreach

    2012-01-01

    A poster which is identical to the two inside pages of the CMS brochure. The poster contains an image of a cross section of the CMS detector, explanation of detector parts, the aims of the CMS experiment and numbers of scientists and institutions associated with the experiment.

  9. Measurement of the dijet angular distributions and search for quark compositeness with the CMS experiment

    International Nuclear Information System (INIS)

    Hinzmann, Andreas Dominik

    2011-01-01

    The Large Hadron Collider (LHC) at the Conseil Europeen pour la Recherche Nucleaire (CERN) allows the study of the interactions of quarks and gluons in a yet unexplored energy regime. In 2010, the LHC delivered an integrated luminosity of more than 36 pb^-1 of proton-proton collisions at a center-of-mass energy of √s = 7 TeV. In these proton-proton collisions, the interactions of the constituent quarks and gluons produced a considerable amount of jets of particles with transverse momenta above 1 TeV. Well suited for the study of these jet processes is the Compact Muon Solenoid (CMS) experiment situated at LHC point 5, as it can measure jets with the necessary energy and angular resolutions over a large range of transverse momentum (≈30 GeV [...]). The dijet angular distribution is studied in terms of the variable χ_dijet = e^(|y_1 - y_2|), where y_1 and y_2 are the rapidities of the two jets, y ≡ (1/2) ln[(E+p_z)/(E-p_z)], and p_z is the projection of the jet momentum along the beam axis. The choice of the variable χ_dijet is motivated by the fact that the normalized differential cross section (1/σ)(dσ/dχ_dijet) (the dijet angular distribution) is flat in this variable for Rutherford scattering, characteristic of spin-1 particle exchange. In contrast to QCD, which predicts a dijet angular distribution similar to Rutherford scattering, new physics, such as quark compositeness, that might have a more isotropic dijet angular distribution would produce an excess at low values of χ_dijet. Since the shapes of the dijet angular distributions for the qg → qg, qq′ → qq′ and gg → gg scattering processes are similar, the QCD prediction does not strongly depend on the parton distribution functions (PDFs) which describe the momentum distribution of the partons inside the protons. Due to the normalization, the dijet angular distribution has a reduced sensitivity to several predominant experimental uncertainties (e.g. the jet energy scale and luminosity uncertainties). The dijet angular distribution
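    For reference, the variable defined above is straightforward to compute from the two leading jets; the sketch below evaluates y = (1/2) ln[(E+p_z)/(E-p_z)] for each jet and then χ_dijet = e^(|y_1 - y_2|). The jet four-momentum values used in the example are arbitrary.

```python
# Compute the dijet angular variable chi_dijet = exp(|y1 - y2|) from jet
# energies and longitudinal momenta, using y = 0.5 * ln((E + pz) / (E - pz)).
import math

def rapidity(energy, pz):
    return 0.5 * math.log((energy + pz) / (energy - pz))

def chi_dijet(jet1, jet2):
    """jet1, jet2 are (E, pz) pairs in consistent units (e.g. GeV)."""
    y1 = rapidity(*jet1)
    y2 = rapidity(*jet2)
    return math.exp(abs(y1 - y2))

# Arbitrary example: two jets with different longitudinal boosts.
print(chi_dijet((850.0, 600.0), (700.0, -200.0)))   # ~ exp(0.88 + 0.29) ~ 3.2
```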

  10. Measurement of the dijet angular distributions and search for quark compositeness with the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hinzmann, Andreas Dominik

    2011-10-07

    … χ_dijet = e^(|y_1 - y_2|), where y_1 and y_2 are the rapidities of the two jets, y ≡ (1/2) ln[(E+p_z)/(E-p_z)], and p_z is the projection of the jet momentum along the beam axis. The choice of the variable χ_dijet is motivated by the fact that the normalized differential cross section (1/σ)(dσ/dχ_dijet) (the dijet angular distribution) is flat in this variable for Rutherford scattering, characteristic of spin-1 particle exchange. In contrast to QCD, which predicts a dijet angular distribution similar to Rutherford scattering, new physics, such as quark compositeness, that might have a more isotropic dijet angular distribution would produce an excess at low values of χ_dijet. Since the shapes of the dijet angular distributions for the qg → qg, qq′ → qq′ and gg → gg scattering processes are similar, the QCD prediction does not strongly depend on the parton distribution functions (PDFs) which describe the momentum distribution of the partons inside the protons. Due to the normalization, the dijet angular distribution has a reduced sensitivity to several predominant experimental uncertainties (e.g. the jet energy scale and luminosity uncertainties). The dijet angular distribution is therefore well suited to test the predictions of QCD and to search for signals of new physics, in particular for signs of quark compositeness. In the following a measurement of the dijet angular distributions and a search for quark compositeness with the CMS experiment is presented. (orig.)

  12. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  13. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Energy Technology Data Exchange (ETDEWEB)

    Holzman, Burt [Fermilab; Bauerdick, Lothar A.T. [Fermilab; Bockelman, Brian [Nebraska U.; Dykstra, Dave [Fermilab; Fisk, Ian [New York U.; Fuess, Stuart [Fermilab; Garzoglio, Gabriele [Fermilab; Girone, Maria [CERN; Gutsche, Oliver [Fermilab; Hufnagel, Dirk [Fermilab; Kim, Hyunwoo [Fermilab; Kennedy, Robert [Fermilab; Magini, Nicolo [Fermilab; Mason, David [Fermilab; Spentzouris, Panagiotis [Fermilab; Tiradani, Anthony [Fermilab; Timm, Steve [Fermilab; Vaandering, Eric W. [Fermilab

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  14. Integrating Amazon EC2 with the CMS Production Framework

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with Cloud Computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2, both for production and analysis use-cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.

  15. Integrating Amazon EC2 with the CMS production framework

    International Nuclear Information System (INIS)

    Melo, Andrew; Sheldon, Paul

    2012-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with Cloud Computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2, both for production and analysis use-cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.

  16. Recent experience and future evolution of the CMS High Level Trigger System

    CERN Document Server

    Bauer, Gerry; Branson, James; Bukowiec, Sebastian Czeslaw; Chaze, Olivier; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Dobson, Marc; Dupont, Aymeric; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino Garrido, Robert; Hartl, Christian; Holzner, Andre Georg; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Nunez Barranco Fernandez, Carlos; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrucci, Andrea; Pieri, Marco; Polese, Giovanni; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schwick, Christoph; Spataru, Andrei Cristian; Stoeckli, Fabian; Sumorok, Konstanty

    2012-01-01

    The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first level trigger at a rate of 100 kHz. These events are read out by the Data Acquisition system (DAQ), assembled in memory in a farm of computers, and finally fed into the high-level trigger (HLT) software running on the farm. The HLT software selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the 2010-2011 collider run is detailed, as well as the current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and CMS DAQ. The short- and medium-term evolution of the HLT software infrastructure is discussed, with future improvements aimed at supporting extensions of the HLT computing power, and addressing remaining performance and maintenance issues.

  17. CMS Fast Facts

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has developed a new quick reference statistical summary on annual CMS program and financial data. CMS Fast Facts includes summary information on total program...

  18. Use of DAGMan in CRAB3 to Improve the Splitting of CMS User Jobs

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, M. [Notre Dame U.; Mascheroni, M. [Fermilab; Woodard, A. [Notre Dame U.; Belforte, S. [INFN, Trieste; Bockelman, B. [Nebraska U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.
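    The regrouping step described above can be illustrated with a small sketch: jobs are sized from a per-event time estimate, run under a time limit, and any events left unprocessed when the limit is hit are gathered into smaller follow-up jobs. This is only a schematic of the splitting logic; the real implementation drives it through HTCondor DAGMan sub-DAGs, which are not reproduced here.

```python
# Schematic of "automatic splitting": size jobs from an estimated seconds/event,
# and regroup events that did not finish within the time limit into smaller jobs.

def make_jobs(first, last, est_sec_per_event, time_limit_sec):
    """Split the event range [first, last) into jobs sized from the time estimate."""
    per_job = max(1, int(time_limit_sec // est_sec_per_event))
    return [(s, min(s + per_job, last)) for s in range(first, last, per_job)]

def run_with_limit(job, true_sec_per_event, time_limit_sec):
    """Return how many events are actually processed before the limit is hit."""
    start, end = job
    return min(end - start, int(time_limit_sec // true_sec_per_event))

# First pass: the estimate (4 s/event) turns out optimistic; the code takes 6 s/event.
limit = 8 * 3600
jobs = make_jobs(0, 10_000, est_sec_per_event=4.0, time_limit_sec=limit)
unfinished = []
for start, end in jobs:
    done = run_with_limit((start, end), true_sec_per_event=6.0, time_limit_sec=limit)
    if start + done < end:
        unfinished.append((start + done, end))

# Second pass: regroup the leftover ranges into smaller jobs using the measured rate.
rescue = [j for s, e in unfinished
          for j in make_jobs(s, e, est_sec_per_event=6.0, time_limit_sec=2 * 3600)]
print(len(jobs), "initial jobs,", len(rescue), "rescue jobs")
```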

  19. Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs

    Science.gov (United States)

    Wolf, M.; Mascheroni, M.; Woodard, A.; Belforte, S.; Bockelman, B.; Hernandez, J. M.; Vaandering, E.

    2017-10-01

    CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. The task is divided into jobs that are distributed among a large collection of worker nodes throughout the Worldwide LHC Computing Grid (WLCG). Splitting a large analysis task into optimally sized jobs is critical to efficient use of distributed computing resources. Jobs that are too big will have excessive runtimes and will not distribute the work across all of the available nodes. However, splitting the project into a large number of very small jobs is also inefficient, as each job creates additional overhead which increases load on infrastructure resources. Currently this splitting is done manually, using parameters provided by the user. However the resources needed for each job are difficult to predict because of frequent variations in the performance of the user code and the content of the input dataset. As a result, dividing a task into jobs by hand is difficult and often suboptimal. In this work we present a new feature called “automatic splitting” which removes the need for users to manually specify job splitting parameters. We discuss how HTCondor DAGMan can be used to build dynamic Directed Acyclic Graphs (DAGs) to optimize the performance of large CMS analysis jobs on the Grid. We use DAGMan to dynamically generate interconnected DAGs that estimate the processing time the user code will require to analyze each event. This is used to calculate an estimate of the total processing time per job, and a set of analysis jobs are run using this estimate as a specified time limit. Some jobs may not finish within the allotted time; they are terminated at the time limit, and the unfinished data is regrouped into smaller jobs and resubmitted.

  20. CMS Wallet Card

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Wallet Card is a quick reference statistical summary on annual CMS program and financial data. The CMS Wallet Card is available for each year from 2004...

  1. ATLAS Distributed Computing

    CERN Document Server

    Schovancova, J; The ATLAS collaboration

    2011-01-01

    The poster details the different aspects of the ATLAS Distributed Computing experience after the first year of LHC data taking. We describe the performance of the ATLAS distributed computing system and the lessons learned during the 2010 run, pointing out parts of the system which were in good shape, and also spotting areas which required improvements. Improvements ranged from hardware upgrades of the ATLAS Tier-0 computing pools to improve data distribution rates, tuning of FTS channels between CERN and Tier-1s, and studying data access patterns for Grid analysis to improve the global processing rate. We show recent software development driven by operational needs with emphasis on data management and job execution in the ATLAS production system.

  2. CMS Sensitivity to Quark Contact Interactions using Dijets

    CERN Document Server

    Esen, Selda

    2006-01-01

    We estimate CMS sensitivity to quark contact interactions in the dijet final state. The canonical model of a contact interaction among left-handed composite quarks changes the dijet angular distribution at high dijet mass. The dijet ratio variable introduced at the Tevatron is used as a simple measure of the angular distribution as a function of dijet mass. The contact interaction signal and QCD background are estimated for the dijet ratio as a function of dijet mass from 0.3 to 6.5 TeV. Statistical uncertainties are estimated for integrated luminosities of 100 pb^-1, 1 fb^-1, and 10 fb^-1 and a realistic trigger table including multiple thresholds and prescales for the single jet triggers. Systematic uncertainties on the dijet ratio are estimated and are found to be small. The chi-squared between the background and the signal is estimated, including systematics, and is used to find CMS sensitivity to the contact interaction scale Lambda^+. For an integrated luminosity of 100 pb^-1, 1 fb^-1, and 10 fb^-1, CMS c...
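    The sensitivity estimate sketched above boils down to a chi-squared comparison of the dijet ratio predicted with and without a contact interaction, summed over dijet-mass bins with statistical and systematic uncertainties added in quadrature. The sketch below shows that calculation on invented numbers; the actual ratio definition, binning and uncertainties are those of the analysis and are not reproduced here.

```python
# Sketch: chi-squared between a QCD-only prediction of the dijet ratio and the
# prediction including a contact interaction, per dijet-mass bin. All numbers
# here are invented and serve only to show the form of the calculation.

def chi_squared(ratio_qcd, ratio_signal, stat_unc, syst_unc):
    chi2 = 0.0
    for r_b, r_s, s_stat, s_syst in zip(ratio_qcd, ratio_signal, stat_unc, syst_unc):
        sigma2 = s_stat ** 2 + s_syst ** 2      # add uncertainties in quadrature
        chi2 += (r_s - r_b) ** 2 / sigma2
    return chi2

# Invented dijet-ratio values in a few high-mass bins.
qcd    = [0.60, 0.61, 0.62, 0.63]
signal = [0.62, 0.66, 0.74, 0.90]   # a contact interaction raises the ratio at high mass
stat   = [0.02, 0.03, 0.05, 0.10]
syst   = [0.01, 0.01, 0.02, 0.03]
print(f"chi2 = {chi_squared(qcd, signal, stat, syst):.1f}")
```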

  3. The architecture of the CMS Level-1 Trigger Control and Monitoring System using UML

    International Nuclear Information System (INIS)

    Magrans de Abril, Marc; Ghabrous Larrea, Carlos; Lazaridis, Christos; Da Rocha Melo, Jose L; Hammer, Josef; Hartl, Christian

    2011-01-01

    The architecture of the Compact Muon Solenoid (CMS) Level-1 Trigger Control and Monitoring software system is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam downtime. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. The architecture of this system is described using the industry-standard Unified Modeling Language (UML). This way the relationships between the different subcomponents of the system become clear and all software upgrades and modifications are simplified. The described architecture has allowed for frequent upgrades that were necessary during the commissioning phase of CMS when the trigger system evolved constantly. As a secondary objective, the paper provides a UML usage example and tries to encourage the standardization of the software documentation of large projects across the LHC and High Energy Physics community.

  4. The architecture of the CMS Level-1 Trigger Control and Monitoring System using UML

    Science.gov (United States)

    Magrans de Abril, Marc; Da Rocha Melo, Jose L.; Ghabrous Larrea, Carlos; Hammer, Josef; Hartl, Christian; Lazaridis, Christos

    2011-12-01

    The architecture of the Compact Muon Solenoid (CMS) Level-1 Trigger Control and Monitoring software system is presented. This system has been installed and commissioned on the trigger online computers and is currently used for data taking. It has been designed to handle the trigger configuration and monitoring during data taking as well as all communications with the main run control of CMS. Furthermore its design has foreseen the provision of the software infrastructure for detailed testing of the trigger system during beam down time. This is a medium-size distributed system that runs over 40 PCs and 200 processes that control about 4000 electronic boards. The architecture of this system is described using the industry-standard Unified Modeling Language (UML). This way the relationships between the different subcomponents of the system become clear and all software upgrades and modifications are simplified. The described architecture has allowed for frequent upgrades that were necessary during the commissioning phase of CMS when the trigger system evolved constantly. As a secondary objective, the paper provides a UML usage example and tries to encourage the standardization of the software documentation of large projects across the LHC and High Energy Physics community.

  5. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  6. The CMS software performance at the start of data taking

    CERN Document Server

    Benelli, Gabriele

    2009-01-01

    The CMS software framework (CMSSW) is a complex project evolving very rapidly as the first LHC colliding beams approach. The computing requirements constrain performance in terms of CPU time, memory footprint and event size on disk to allow for planning and managing the computing infrastructure necessary to handle the needs of the experiment. A performance suite of tools has been developed to track all aspects of code performance, through the software release cycles, allowing for regression and guiding code development for optimization. In this talk, we describe the CMSSW performance suite tools used and present some sample performance results from the release integration process for the CMS software.
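
    A toy illustration of the kind of regression tracking such a performance suite implies: compare a few per-event metrics between a reference release and a candidate release and flag degradations beyond a tolerance. Metric names, values and the 5% threshold are invented for the sketch and are not the CMSSW performance suite itself.

    # Toy regression check between two software releases.
    reference = {"cpu_time_per_event_s": 1.20, "peak_rss_mb": 1450.0, "event_size_kb": 480.0}
    candidate = {"cpu_time_per_event_s": 1.35, "peak_rss_mb": 1440.0, "event_size_kb": 505.0}
    tolerance = 0.05  # allow a 5% increase before flagging (illustrative)

    for metric, ref in reference.items():
        new = candidate[metric]
        change = (new - ref) / ref
        status = "REGRESSION" if change > tolerance else "ok"
        print(f"{metric:24s} {ref:10.2f} -> {new:10.2f}  ({change:+.1%})  {status}")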

  7. Database usage for the CMS ECAL Laser Monitoring System

    CERN Document Server

    Timciuc, Vladlen

    2009-01-01

    The CMS detector at LHC is equipped with a high precision electromagnetic crystal calorimeter (ECAL). The crystals experience a transparency change when exposed to radiation during LHC operation, which recovers in the absence of irradiation on the time scale of hours. This change of the crystal response is monitored with a laser system which performs a transparency measurement of each crystal of the ECAL within twenty minutes. The monitoring data is analyzed on a PC farm attached to the central data acquisition system of CMS. After analyzing the raw data, a reduced data set is stored in the Online Master Data Base (OMDS) which is connected to the online computing infrastructure of CMS. The data stored in OMDS, representing the largest data set stored in OMDS for ECAL, contains all necessary information to perform a detailed crystal response monitoring as well as an analysis of the dynamics of the transparency change. For the CMS physics event data reconstruction, only a reduced set of information from the transpa...
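
    A minimal sketch of how a laser transparency measurement could be turned into a response correction for one crystal. The power-law form and the alpha exponent used here are assumptions made only to illustrate the idea; they are not taken from the monitoring system described above.

    # Illustrative transparency correction: scale a crystal's measured response by a
    # power of the laser transparency ratio R/R0. Functional form and alpha are assumed.
    def corrected_response(raw_response, laser_ratio, alpha=1.5):
        """Correct a crystal response for a transparency loss, i.e. laser_ratio = R/R0 < 1."""
        return raw_response / (laser_ratio ** alpha)

    print(corrected_response(raw_response=100.0, laser_ratio=0.97))  # ~104.7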

  8. CMS tier structure and operation of the experiment-specific tasks in Germany

    International Nuclear Information System (INIS)

    Nowack, A

    2008-01-01

    In Germany, several university institutes and research centres take part in the CMS experiment. Concerning the data analysis, a couple of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre at Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY at Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. Both parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY at Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the building of a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation includes physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. Also, the CMS-specific services such as the data transfer tool PhEDEx or the Monte Carlo production are operated by people from different sites in order to spread the knowledge widely and increase the redundancy in terms of operators.

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  10. Intelligent Distributed Computing VI : Proceedings of the 6th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Badica, Costin; Malgeri, Michele; Unland, Rainer

    2013-01-01

    This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing -- IDC~2012, of the International Workshop on Agents for Cloud -- A4C~2012 and of the Fourth International Workshop on Multi-Agent Systems Technology and Semantics -- MASTS~2012. All the events were held in Calabria, Italy during September 24-26, 2012. The 37 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trus...

  11. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    AUTHOR|(CDS)2067159

    2016-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is widely used also at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...
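
    A schematic sketch of the iterative-step bookkeeping described above: each step only sees hits that earlier steps have not already assigned to an accepted track. The seeding, pattern-recognition, fitting and selection functions here are placeholders, not the CMS algorithms.

    # Skeleton of iterative tracking with hit masking between steps; helpers are placeholders.
    def run_iterative_tracking(hits, steps):
        used_hits, tracks = set(), []
        for step in steps:
            available = [h for h in hits if h["id"] not in used_hits]
            seeds = step["seed"](available)                      # seeding
            candidates = step["pattern_reco"](seeds, available)  # pattern recognition
            fitted = [step["fit"](c) for c in candidates]        # e.g. Kalman-filter fit
            accepted = [t for t in fitted if step["select"](t)]  # final filtering and cleaning
            for track in accepted:
                used_hits.update(track["hit_ids"])               # mask hits for later steps
            tracks.extend(accepted)
        return tracks

    # Trivial demonstration with one step that finds nothing.
    demo_step = {"seed": lambda hs: [], "pattern_reco": lambda s, hs: [],
                 "fit": lambda c: c, "select": lambda t: True}
    print(run_iterative_tracking(hits=[{"id": i} for i in range(10)], steps=[demo_step]))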

  12. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    Goetzmann, Christophe

    2014-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is widely used also at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  13. Overview of recent heavy-ion results from CMS

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Byungsik

    2016-12-15

    The most recent CMS data related to high-density QCD are presented for pp and PbPb collisions at 2.76 TeV and pPb collisions at 5.02 TeV. PbPb collisions are essential to understand the collective behavior and the final-state effects behind the detailed characteristics of hot, dense partonic matter, whereas pPb collisions provide critical information on the initial-state effects, including the modification of the parton distribution function in cold nuclei. This paper highlights some of the recent heavy-ion results from CMS.

  14. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

    The main goal of this project is to investigate, develop, and implement algorithms for numerical linear algebra on parallel computers in order to acquire expertise in methods for parallel computations. An important motivation for analyzing and investigating the potential for parallelism in these algorithms is that many scientific applications rely heavily on the performance of the involved dense linear algebra building blocks. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. We emphasize distributed-memory massively parallel computers - such as the Connection Machines model CM-200 and model CM-5/CM-5E - available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was at the time this project started one of the few existing massively parallel computers...
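
    A small sketch of the row-block data distribution typical of distributed-memory dense linear algebra: each process owns a contiguous block of rows of the matrix and the vector is replicated. It is written with mpi4py purely for illustration (the thesis itself targets the CM-200/CM-5, not MPI), and it assumes the global size is divisible by the number of ranks.

    # Row-block distributed matrix-vector product y = A @ x. Run, for example, with
    #   mpirun -n 4 python matvec.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 8                                  # global size, assumed divisible by the rank count
    rows_per_rank = n // size
    rng = np.random.default_rng(seed=rank)

    local_A = rng.standard_normal((rows_per_rank, n))   # this rank's block of rows of A
    x = np.arange(n, dtype=float) if rank == 0 else None
    x = comm.bcast(x, root=0)              # replicate the vector x on every rank

    local_y = local_A @ x                  # local partial result
    y = np.concatenate(comm.allgather(local_y))  # assemble the full y on every rank
    if rank == 0:
        print(y)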

  15. Radiation background with the CMS RPCs at the LHC

    CERN Document Server

    Costantini, Silvia; Cai, J.; Li, Q.; Liu, S.; Qian, S.; Wang, D.; Xu, Z.; Zhang, F.; Choi, Y.; Goh, J.; Kim, D.; Choi, S.; Hong, B.; Kang, J.W.; Kang, M.; Kwon, J.H.; Lee, K.S.; Lee, S.K.; Park, S.K.; Pant, L.M.; Mohanty, A.K.; Chudasama, R.; Singh, J.B.; Bhatnagar, V.; Mehta, A.; Kumar, R.; Cauwenbergh, S.; Cimmino, A.; Crucy, S.; Fagot, A.; Garcia, G.; Ocampo, A.; Poyraz, D.; Salva, S.; Thyssen, F.; Tytgat, M.; Zaganidis, N.; Doninck, W.V.; Cabrera, A.; Chaparro, L.; Gomez, J.P.; Gomez, B.; Sanabria, J.C.; Avila, C.; Ahmad, A.; Muhammad, S.; Shoaib, M.; Hoorani, H.; Awan, I.; Ali, I.; Ahmed, W.; Asghar, M.I.; Shahzad, H.; Sayed, A.; Ibrahim, A.; Aly, S.; Assran, Y.; Radi, A.; Elkafrawy, T.; Sharma, A.; Colafranceschi, S.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Iaselli, G.; Loddo, F.; Maggi, M.; Nuzzo, S.; Pugliese, G.; Radogna, R.; Venditti, R.; Verwilligen, P.; Benussi, L.; Bianco, S.; Piccolo, D.; Paolucci, P.; Buontempo, S.; Cavallo, N.; Merola, M.; Fabozzi, F.; Iorio, O.M.; Braghieri, A.; Montagna, P.; Riccardi, C.; Salvini, P.; Vitulo, P.; Vai, I.; Magnani, A.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Aleksandrov, A.; Genchev, V.; Iaydjiev, P.; Rodozov, M.; Sultanov, G.; Vutova, M.; Stoykova, S.; Hadjiiska, R.; Ibargüen, H.S.; Morales, M.I.P.; Bernardino, S.C.; Bagaturia, I.; Tsamalaidze, Z.; Crotty, I.; Kim, M.S.

    2015-05-28

    The Resistive Plate Chambers (RPCs) are employed in the CMS experiment at the LHC as dedicated trigger system both in the barrel and in the endcap. This note presents results of the radiation background measurements performed with the 2011 and 2012 proton-proton collision data collected by CMS. Emphasis is given to the measurements of the background distribution inside the RPCs. The expected background rates during the future running of the LHC are estimated both from extrapolated measurements and from simulation.

  16. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring of the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.
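
    A toy version of threshold-based automatic exclusion driven by functional-test results, in the spirit of the automation described above; the site names, test efficiencies and threshold are invented.

    # Toy automatic exclusion: sites whose recent functional-test efficiency falls
    # below a threshold are removed from the list used for analysis brokering.
    test_efficiency = {"SITE_A": 0.98, "SITE_B": 0.42, "SITE_C": 0.91}  # invented values
    THRESHOLD = 0.80

    online  = sorted(s for s, eff in test_efficiency.items() if eff >= THRESHOLD)
    offline = sorted(s for s, eff in test_efficiency.items() if eff < THRESHOLD)
    print("usable for analysis:", online)
    print("auto-excluded      :", offline)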

  17. CERN Open Days 2013, Point 5 - CMS: CMS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: Come to LHC's Point 5 and visit the Compact Muon Solenoid (CMS) experiment that discovered the Higgs boson! Descend 100 metres underground and take a walk in the cathedral-sized cavern housing the 14,000-tonne CMS detector. Ask Higgs hunters and other scientists just about anything, be it questions about their work, particle physics or the engineering challenges of building CMS. On the surface, there is no restricted access: Point 5 will be abuzz all day long with activities for all ages, including literally "cool" cryogenics shows featuring the world's fastest ice-cream maker, dance performances, and much more.

  18. CMS Thesis Award

    CERN Multimedia

    2004-01-01

    The 2003 CMS thesis award was presented to Riccardo Ranieri on 15 March for his Ph.D. thesis "Trigger Selection of WH → μ ν b bbar with CMS", where 'WH → μ ν b bbar' represents the associated production of the W boson and the Higgs boson and their subsequent decays. Riccardo received his Ph.D. from the University of Florence and was supervised by Carlo Civinini. In total nine theses were nominated for the award, which was judged on originality, impact within the field of high energy physics, impact within CMS and clarity of writing. Gregory Snow, secretary of the awarding committee, explains why Riccardo's thesis was chosen: ‘‘The search for the Higgs boson is one of the main physics goals of CMS. Riccardo's thesis helps the experiment to formulate the strategy which will be used in that search.'' Lorenzo Foà, Chairperson of the CMS Collaboration Board, presented Riccardo with a commemorative engraved plaque. He will also receive the opportunity to...

  19. Dose-ranging pharmacokinetics of colistin methanesulphonate (CMS) and colistin in rats following single intravenous CMS doses.

    Science.gov (United States)

    Marchand, Sandrine; Lamarche, Isabelle; Gobin, Patrice; Couet, William

    2010-08-01

    The aim of this study was to evaluate the effect of colistin methanesulphonate (CMS) dose on CMS and colistin pharmacokinetics in rats. Three rats per group received an intravenous bolus of CMS at a dose of 5, 15, 30, 60 or 120 mg/kg. Arterial blood samples were drawn at 0, 5, 15, 30, 60, 90, 120, 150 and 180 min. CMS and colistin plasma concentrations were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The pharmacokinetic parameters of CMS and colistin were calculated by non-compartmental analysis. Linear relationships were observed between CMS and colistin AUCs to infinity and CMS doses, as well as between CMS and colistin C(max) and CMS doses. CMS and colistin pharmacokinetics were linear for a range of colistin concentrations covering the range of values encountered and recommended in patients even during treatment with higher doses.
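
    A minimal non-compartmental sketch of the quantities mentioned above: a trapezoidal AUC to the last measured time point plus a terminal extrapolation C_last/lambda_z, with lambda_z from a log-linear fit to the terminal points. The concentration-time values are invented for illustration and are not data from this study.

    import numpy as np

    # Invented plasma concentration-time profile (time in min, concentration in mg/L).
    t = np.array([5, 15, 30, 60, 90, 120, 150, 180], dtype=float)
    c = np.array([40.0, 30.0, 22.0, 12.0, 6.5, 3.6, 2.0, 1.1])

    auc_0_tlast = np.trapz(c, t)                      # linear trapezoidal rule

    # Terminal slope lambda_z from a log-linear fit to the last four points.
    lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]
    auc_0_inf = auc_0_tlast + c[-1] / lam_z           # extrapolation to infinity

    print(f"AUC(0-tlast) = {auc_0_tlast:.0f} mg*min/L")
    print(f"AUC(0-inf)   = {auc_0_inf:.0f} mg*min/L, terminal half-life = {np.log(2)/lam_z:.0f} min")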

  20. CMS tracker observes muons

    CERN Multimedia

    2006-01-01

    A computer image of a cosmic ray traversing the many layers of the TEC+ silicon sensors. The first cosmic muon tracks have been observed in one of the CMS tracker endcaps. On 14 March, a sector on one of the two large tracker endcaps underwent a cosmic muon run. Since then, thousands of tracks have been recorded. These data will be used not only to study the tracking, but also to exercise various track alignment algorithms The endcap tested, called the TEC+, is under construction at RWTH Aachen in Germany. The endcaps have a modular design, with silicon strip modules mounted onto wedge-shaped carbon fibre support plates, so-called petals. Up to 28 modules are arranged in radial rings on both sides of these plates. One eighth of an endcap is populated with 18 petals and called a sector. The next major step is a test of the first sector at CMS operating conditions, with the silicon modules at a temperature below -10°C. Afterwards, the remaining seven sectors have to be integrated. In autumn 2006, TEC+ wil...

  1. The CMS Level-1 Trigger Barrel Track Finder

    International Nuclear Information System (INIS)

    Ero, J.; Wulz, C.; Evangelou, I.; Flouris, G.; Foudas, C.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Guiducci, L.; Sotiropoulos, S.; Sphicas, P.; Triossi, A.

    2016-01-01

    The design and performance of the upgraded CMS Level-1 Trigger Barrel Muon Track Finder (BMTF) is presented. Monte Carlo simulation data as well as cosmic ray data from a CMS muon detector slice test have been used to study in detail the performance of the new track finder. The design architecture is based on twelve MP7 cards each of which uses a Xilinx Virtex-7 FPGA and can receive and transmit data at 10 Gbps from 72 input and 72 output fibers. According to the CMS Trigger Upgrade TDR the BMTF receives trigger primitive data which are computed using both RPC and DT data and transmits data from a number of muon candidates to the upgraded Global Muon Trigger. Results from detailed studies of comparisons between the BMTF algorithm results and the results of a C++ emulator are also presented. The new BMTF will be commissioned for data taking in 2016

  2. Dynamic configuration of the CMS Data Acquisition cluster

    CERN Document Server

    Bauer, Gerry; Biery, Kurt; Boyer, Vincent; Branson, James; Cano, Eric; Cheung, Harry; Ciganek, Marek; Cittolin, Sergio; Coarasa, Jose Antonio; Deldicque, Christian; Dusinberre, Elizabeth; Erhan, Samim; Fortes Rodrigues, Fabiana; Gigi, Dominique; Glege, Frank; Gomez-Reino, Robert; Gutleber, Johannes; Hatton, Derek; Laurens, Jean-Francois; Lopez Perez, Juan Antonio; Meijers, Frans; Meschi, Emilio; Meyer, Andreas; Mommsen, Remigius K; Moser, Roland; O'Dell, Vivian; Oh, Alexander; Orsini, Luciano; Patras, Vaios; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Shpakov, Dennis; Simon, Sean; Sumorok, Konstanty; Zanetti, Marco

    2010-01-01

    The CMS Data Acquisition cluster, which runs around 10000 applications, is configured dynamically at run time. XML configuration documents determine what applications are executed on each node and over what networks these applications communicate. Through this mechanism the DAQ System may be adapted to the required performance, partitioned in order to perform (test-) runs in parallel, or re-structured in case of hardware faults. This paper presents the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system based on a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of hardware modules, data and control links, nodes and the network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes application parameters and generates the XML configuration documents as well a...
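
    A toy illustration of generating a per-host XML application configuration from a simple host/application model, in the spirit of the tool described above. The element and attribute names and the layout dictionary are invented, not the actual DAQ configuration schema.

    import xml.etree.ElementTree as ET

    # Invented model: which applications run on which hosts, and over which network.
    layout = {
        "bu-01.cms": [("BuilderUnit", "data-network")],
        "fu-01.cms": [("FilterUnit", "data-network"), ("Monitor", "control-network")],
    }

    root = ET.Element("Configuration")
    for host, apps in layout.items():
        ctx = ET.SubElement(root, "Context", host=host)
        for app_class, network in apps:
            ET.SubElement(ctx, "Application", {"class": app_class, "network": network})

    print(ET.tostring(root, encoding="unicode"))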

  3. CMS Central Hadron Calorimeter

    OpenAIRE

    Budd, Howard S.

    2001-01-01

    We present a description of the CMS central hadron calorimeter. We describe the production of the 1996 CMS hadron testbeam module. We show the results of the quality control tests of the testbeam module. We present some results of the 1995 CMS hadron testbeam.

  4. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  5. CMS Records Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  6. CMS Dashboard Task Monitoring: A user-centric monitoring view

    International Nuclear Information System (INIS)

    Karavakis, Edward; Khan, Akram; Andreeva, Julia; Maier, Gerhild; Gaidioz, Benjamin

    2010-01-01

    We are now in a phase change of the CMS experiment where people are turning more intensely to physics analysis and away from construction. This brings many challenging issues with respect to the monitoring of user analysis. The physicists must be able to monitor the execution status, application and grid-level messages of their tasks that may run at any site within the CMS Virtual Organisation. The CMS Dashboard Task Monitoring project provides this information to individual analysis users by collecting and exposing a user-centric set of information regarding submitted tasks, including the reason of failure, distribution by site and over time, consumed time and efficiency. The development was user-driven, with physicists invited to test the prototype in order to assemble further requirements and identify weaknesses in the application.
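
    A toy aggregation of per-job records into the user-centric quantities mentioned above (jobs per site, failure reasons, overall efficiency); the job records and site names are invented.

    from collections import Counter

    # Invented per-job records for one user task.
    jobs = [
        {"site": "T2_DE_DESY", "status": "finished"},
        {"site": "T2_DE_DESY", "status": "failed", "reason": "stage-out error"},
        {"site": "T2_US_MIT",  "status": "finished"},
        {"site": "T2_US_MIT",  "status": "finished"},
    ]

    by_site  = Counter(j["site"] for j in jobs)
    failures = Counter(j.get("reason", "unknown") for j in jobs if j["status"] == "failed")
    efficiency = sum(j["status"] == "finished" for j in jobs) / len(jobs)

    print("jobs per site   :", dict(by_site))
    print("failure reasons :", dict(failures))
    print(f"task efficiency : {efficiency:.0%}")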

  7. CMS analysis school model

    International Nuclear Information System (INIS)

    Malik, S; Bloom, K; Shipsey, I; Cavanaugh, R; Klima, B; Chan, Kai-Feng; D'Hondt, J; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrade, the education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  8. CMS Analysis School Model

    Energy Technology Data Exchange (ETDEWEB)

    Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized by CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrade, the education of those new to CMS and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  9. Impact of CMS Multi-jets and Missing Energy Search on CMSSM Fits

    CERN Document Server

    Allanach, B C

    2011-01-01

    Recent CMS data significantly extend the direct search exclusion for supersymmetry. We examine the impact of such data on global fits of the constrained minimal supersymmetric standard model (CMSSM) to indirect and cosmological data. By simulating supersymmetric signal events at the LHC, we construct a likelihood map for the recent CMS data, validating it against the exclusion region calculated by the experiment itself. A previous CMSSM global fit is then re-weighted by our likelihood map. The CMS results nibble away at the high fit probability density region, transforming probability distributions for the scalar and gluino masses. The CMS search has a non-trivial effect on tan \beta due to correlations between the parameters implied by the fits to indirect data.
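
    A schematic sketch of re-weighting posterior samples from an earlier fit by a new likelihood map, as described above. The sample grid and the likelihood function are invented stand-ins, not the actual CMSSM fit or the CMS search likelihood.

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented stand-in for samples from a previous global fit: columns are (m0, m1/2) in GeV,
    # each sample carrying an initial fit weight.
    samples = rng.uniform(low=[100, 100], high=[2000, 1000], size=(5000, 2))
    old_weights = np.ones(len(samples))

    # Invented stand-in for the direct-search likelihood map: low m1/2 is disfavoured.
    def search_likelihood(m12):
        return 1.0 / (1.0 + np.exp((450.0 - m12) / 50.0))

    new_weights = old_weights * search_likelihood(samples[:, 1])
    new_weights /= new_weights.sum()

    mean_m12 = np.average(samples[:, 1], weights=new_weights)
    print(f"re-weighted mean m1/2 ~ {mean_m12:.0f} GeV")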

  10. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection is implemented in Python. The tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either the transient or the persistent version of a scenario, as well as a specific version of a given scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown based on the current implementation, assuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.
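
    A toy Python-side selection of a geometry scenario by name, in the spirit of collecting configuration fragments into a workflow script. The scenario names, fragment names and the persistent/transient switch are invented for illustration and are not the CMSSW configuration.

    # Invented registry of geometry scenarios; each maps to the configuration
    # fragments that a workflow script would load.
    GEOMETRY_SCENARIOS = {
        "2013":        ["GeometryIdeal_cff", "TrackerRecoGeometry_cff"],
        "2019Upgrade": ["GeometryExtended2019_cff", "TrackerRecoGeometry_cff"],
    }

    def build_geometry_config(scenario, persistent=False):
        """Return the list of fragments for a scenario, transient by default."""
        try:
            fragments = list(GEOMETRY_SCENARIOS[scenario])
        except KeyError:
            raise ValueError(f"unknown geometry scenario: {scenario!r}") from None
        if persistent:
            fragments.append("GeometryDBWriter_cff")  # invented persistent variant
        return fragments

    print(build_geometry_config("2019Upgrade", persistent=True))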

  11. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  12. CMS Drug Spending

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has released several information products that provide spending information for prescription drugs in the Medicare and Medicaid programs. The CMS Drug Spending...

  13. Electronic system of the RPC Muon Trigger in CMS experiment at LHC accelerator (Elektroniczny system trygera mionowego RPC w eksperymencie CMS akceleratora LHC)

    CERN Document Server

    Bialkowska, H

    2009-01-01

    This paper presents the implementation of the distributed, multichannel electronic measurement system for the RPC-based Muon Trigger in the CMS experiment at the LHC. The introduction briefly describes the research aims of the LHC and the metrological requirements for CMS - good spatial and time resolution, and the possibility to estimate multiple physical parameters from registered collisions of particles. The paper then describes the RPC Muon Trigger, consisting of 200 000 independent channels for position measurement. The first part of the paper presents the functional structure of the system in the context of the requirements set by the CMS experiment, such as the global triggering system and data acquisition. The second part describes the hardware solutions used in particular parts of the RPC detector measurement system and shows some test results. The paper is of a digest and overview nature.

  14. CMS Awards

    CERN Multimedia

    2004-01-01

    Ali Mohammad Rafiee receives the CMS Gold Award from Michel Della Negra of CMS. As part of the fifth annual CMS Awards, Iranian contractor HEPCO, located in Arak, an industrial town 200 km west of Tehran, received their Gold Award in a ceremony held on 14 June 2004 (the other award winners were reported in bulletin 13/2004). The Awards are given each year to a small number of the approximately one thousand contractors working on the CMS project. Gold Awards are given for outstanding technical achievement in work carried out for the detector. HEPCO received the Award for the excellent quality of their work in constructing two 25 tonne support tables, two 75 tonne shields (FCS) and eight supporting brackets to lower the HF into the cavern. Welds and machining obtained tolerances that were very difficult in structures of that size. Mr. A. M. Rafiee, the General Manager of the company, acknowledged the benefits of this collaboration, and thanked the efforts and skills of the many staff involved.

  15. Tracking at High Level Trigger in CMS

    CERN Document Server

    Tosi, Mia

    2016-01-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and ...
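
    The back-of-the-envelope arithmetic implied by the figures above: at a 100 kHz Level-1 output rate and roughly 200 ms per event at the HLT, on the order of 20,000 cores must be concurrently busy in the farm.

    # CPU budget implied by the quoted numbers: input rate times per-event time
    # gives the number of cores that must be busy at any moment.
    l1_rate_hz = 100_000   # nominal Level-1 output rate
    hlt_time_s = 0.200     # ~200 ms per event at the HLT (2012 figure)

    busy_cores = l1_rate_hz * hlt_time_s
    print(f"~{busy_cores:,.0f} cores kept busy by the HLT")   # ~20,000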

  16. Job life cycle management libraries for CMS workflow management projects

    International Nuclear Information System (INIS)

    Lingen, Frank van; Wilkinson, Rick; Evans, Dave; Foulkes, Stephen; Afaq, Anzar; Vaandering, Eric; Ryu, Seangchan

    2010-01-01

    Scientific analysis and simulation require the processing and generation of millions of data samples. These tasks are often composed of multiple smaller tasks divided over multiple (computing) sites. This paper discusses the Compact Muon Solenoid (CMS) workflow infrastructure, and specifically the Python-based workflow library which is used for so-called task lifecycle management. The CMS workflow infrastructure consists of three layers: high-level specification of the various tasks based on input/output data sets, life cycle management of task instances derived from the high-level specification, and execution management. The workflow library is the result of a convergence of three CMS sub-projects that respectively deal with scientific analysis, simulation and real-time data aggregation from the experiment. This will reduce duplication and hence development and maintenance costs.
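
    A minimal sketch of task life-cycle bookkeeping of the kind such a library provides: a task moves only along allowed state transitions. The states and transitions here are invented for illustration, not those of the CMS workflow library.

    # Minimal task life-cycle tracker: only the listed transitions are allowed.
    ALLOWED = {
        "new":      {"queued"},
        "queued":   {"running"},
        "running":  {"finished", "failed"},
        "failed":   {"queued"},    # allow resubmission
        "finished": set(),
    }

    class Task:
        def __init__(self, name):
            self.name, self.state = name, "new"

        def advance(self, new_state):
            if new_state not in ALLOWED[self.state]:
                raise ValueError(f"{self.name}: cannot go {self.state} -> {new_state}")
            self.state = new_state

    t = Task("merge_step_1")
    for s in ("queued", "running", "finished"):
        t.advance(s)
    print(t.name, "ended in state", t.state)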

  17. Model unspecific search in CMS. Results at 8 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In the year 2012, CMS collected a total data set of approximately 20 fb^-1 in proton-proton collisions at √(s)=8 TeV. Dedicated searches for physics beyond the standard model are commonly designed with the signatures of a given theoretical model in mind. While this approach allows for an optimised sensitivity to the sought-after signal, it may cause unexpected phenomena to be overlooked. In a complementary approach, the Model Unspecific Search in CMS (MUSiC) analyses CMS data in a general way. Depending on the reconstructed final state objects (e.g. electrons), collision events are sorted into classes. In each of the classes, the distributions of selected kinematic variables are compared to standard model simulation. An automated statistical analysis is performed to quantify the agreement between data and prediction. In this talk, the analysis concept is introduced and selected results of the analysis of the 2012 CMS data set are presented.
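
    A toy version of the first step described above: sorting events into exclusive classes keyed by their reconstructed final-state objects. The events and kinematic values are invented placeholders, and the subsequent statistical comparison to simulation is not shown.

    from collections import defaultdict

    # Invented reconstructed events: final-state object content plus one kinematic value.
    events = [
        {"objects": ("electron", "electron"), "sum_pt": 180.0},
        {"objects": ("electron", "muon"),     "sum_pt": 240.0},
        {"objects": ("electron", "electron"), "sum_pt": 520.0},
    ]

    # Sort events into classes keyed by their (sorted) object content.
    classes = defaultdict(list)
    for ev in events:
        key = "+".join(sorted(ev["objects"]))
        classes[key].append(ev["sum_pt"])

    for name, values in sorted(classes.items()):
        print(f"class {name:20s} events={len(values)}  sum_pt values={values}")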

  18. Standard Model Higgs decay for two Photons in CMS

    CERN Multimedia

    Daniel Denegri

    2000-01-01

    Simulated two-photon mass distribution for the SM Higgs and the expected background in the CMS PbWO4 crystal calorimeter for an integrated luminosity of 10^5 pb^-1, with a detailed simulation of the calorimeter response.

  19. Heavy Flavour distributions from CMS with 2017 data at 13 TeV

    CERN Document Server

    CMS Collaboration

    2018-01-01

    We report plots on heavy flavour from the data collected by CMS in 2017 at the LHC at 13 TeV. B meson performance plots in two different periods, characterized by different instantaneous luminosities, are included.

  20. The architecture and operation of the CMS Tier-0

    International Nuclear Information System (INIS)

    Hufnagel, Dirk

    2011-01-01

    The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It takes care of the first processing steps of data from the LHC at CERN. The automated workflows running in the Tier-0 contain both low-latency processing chains for time-critical applications and bulk chains to archive the recorded data off-site from the host laboratory. It is a mix between an online and an offline system, because the data that the CMS DAQ initially writes out is of a temporary nature. Most of the complexity in the design of this system comes from this unique combination of online and offline use cases and dependencies. In this talk, we present the software design of the CMS Tier-0 system and an analysis of the 24/7 operation of the system in the 2009/2010 data-taking periods.
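
    A toy router that illustrates the split between low-latency express processing and the bulk archival chain mentioned above; the stream names and routing rule are invented.

    # Invented routing rule: some online streams get low-latency "express" processing,
    # everything else goes to the bulk repack/reconstruction/archive chain.
    EXPRESS_STREAMS = {"Calibration", "ExpressPhysics"}

    def route(stream_name):
        return "express" if stream_name in EXPRESS_STREAMS else "bulk"

    for stream in ("ExpressPhysics", "PhysicsMuon", "Calibration"):
        print(f"{stream:15s} -> {route(stream)} chain")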

  1. CMS Data Processing Workflows during an Extended Cosmic Ray Run

    CERN Document Server

    Chatrchyan, S; Sirunyan, A M; Adam, W; Arnold, B; Bergauer, H; Bergauer, T; Dragicevic, M; Eichberger, M; Erö, J; Friedl, M; Frühwirth, R; Ghete, V M; Hammer, J; Hänsel, S; Hoch, M; Hörmann, N; Hrubec, J; Jeitler, M; Kasieczka, G; Kastner, K; Krammer, M; Liko, D; Magrans de Abril, I; Mikulec, I; Mittermayr, F; Neuherz, B; Oberegger, M; Padrta, M; Pernicka, M; Rohringer, H; Schmid, S; Schöfbeck, R; Schreiner, T; Stark, R; Steininger, H; Strauss, J; Taurok, A; Teischinger, F; Themel, T; Uhl, D; Wagner, P; Waltenberger, W; Walzel, G; Widl, E; Wulz, C E; Chekhovsky, V; Dvornikov, O; Emeliantchik, I; Litomin, A; Makarenko, V; Marfin, I; Mossolov, V; Shumeiko, N; Solin, A; Stefanovitch, R; Suarez Gonzalez, J; Tikhonov, A; Fedorov, A; Karneyeu, A; Korzhik, M; Panov, V; Zuyeuski, R; Kuchinsky, P; Beaumont, W; Benucci, L; Cardaci, M; De Wolf, E A; Delmeire, E; Druzhkin, D; Hashemi, M; Janssen, X; Maes, T; Mucibello, L; Ochesanu, S; Rougny, R; Selvaggi, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Adler, V; Beauceron, S; Blyweert, S; D'Hondt, J; De Weirdt, S; Devroede, O; Heyninck, J; Kalogeropoulos, A; Maes, J; Maes, M; Mozer, M U; Tavernier, S; Van Doninck, W; Van Mulders, P; Villella, I; Bouhali, O; Chabert, E C; Charaf, O; Clerbaux, B; De Lentdecker, G; Dero, V; Elgammal, S; Gay, A P R; Hammad, G H; Marage, P E; Rugovac, S; Vander Velde, C; Vanlaer, P; Wickens, J; Grunewald, M; Klein, B; Marinov, A; Ryckbosch, D; Thyssen, F; Tytgat, M; Vanelderen, L; Verwilligen, P; Basegmez, S; Bruno, G; Caudron, J; Delaere, C; Demin, P; Favart, D; Giammanco, A; Grégoire, G; Lemaitre, V; Militaru, O; Ovyn, S; Piotrzkowski, K; Quertenmont, L; Schul, N; Beliy, N; Daubie, E; Alves, G A; Pol, M E; Souza, M H G; Carvalho, W; De Jesus Damiao, D; De Oliveira Martins, C; Fonseca De Souza, S; Mundim, L; Oguri, V; Santoro, A; Silva Do Amaral, S M; Sznajder, A; Fernandez Perez Tomei, T R; Ferreira Dias, M A; Gregores, E M; Novaes, S F; Abadjiev, K; Anguelov, T; Damgov, J; Darmenov, N; Dimitrov, L; Genchev, V; Iaydjiev, P; Piperov, S; Stoykova, S; Sultanov, G; Trayanov, R; Vankov, I; Dimitrov, A; Dyulendarova, M; Kozhuharov, V; Litov, L; Marinova, E; Mateev, M; Pavlov, B; Petkov, P; Toteva, Z; Chen, G M; Chen, H S; Guan, W; Jiang, C H; Liang, D; Liu, B; Meng, X; Tao, J; Wang, J; Wang, Z; Xue, Z; Zhang, Z; Ban, Y; Cai, J; Ge, Y; Guo, S; Hu, Z; Mao, Y; Qian, S J; Teng, H; Zhu, B; Avila, C; Baquero Ruiz, M; Carrillo Montoya, C A; Gomez, A; Gomez Moreno, B; Ocampo Rios, A A; Osorio Oliveros, A F; Reyes Romero, D; Sanabria, J C; Godinovic, N; Lelas, K; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Dzelalija, M; Brigljevic, V; Duric, S; Kadija, K; Morovic, S; Fereos, R; Galanti, M; Mousa, J; Papadakis, A; Ptochos, F; Razis, P A; Tsiakkouri, D; Zinonos, Z; Hektor, A; Kadastik, M; Kannike, K; Müntel, M; Raidal, M; Rebane, L; Anttila, E; Czellar, S; Härkönen, J; Heikkinen, A; Karimäki, V; Kinnunen, R; Klem, J; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Nysten, J; Tuominen, E; Tuominiemi, J; Ungaro, D; Wendland, L; Banzuzi, K; Korpela, A; Tuuva, T; Nedelec, P; Sillou, D; Besancon, M; Chipaux, R; Dejardin, M; Denegri, D; Descamps, J; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Gentit, F X; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Lemaire, M C; Locci, E; Malcles, J; Marionneau, M; Millischer, L; Rander, J; Rosowsky, A; Rousseau, D; Titov, M; Verrecchia, P; Baffioni, S; Bianchini, L; Bluj, M; Busson, P; Charlot, C; Dobrzynski, L; Granier de Cassagnac, 
R; Haguenauer, M; Miné, P; Paganini, P; Sirois, Y; Thiebaux, C; Zabi, A; Agram, J L; Besson, A; Bloch, D; Bodin, D; Brom, J M; Conte, E; Drouhin, F; Fontaine, J C; Gelé, D; Goerlach, U; Gross, L; Juillot, P; Le Bihan, A C; Patois, Y; Speck, J; Van Hove, P; Baty, C; Bedjidian, M; Blaha, J; Boudoul, G; Brun, H; Chanon, N; Chierici, R; Contardo, D; Depasse, P; Dupasquier, T; El Mamouni, H; Fassi, F; Fay, J; Gascon, S; Ille, B; Kurca, T; Le Grand, T; Lethuillier, M; Lumb, N; Mirabito, L; Perries, S; Vander Donckt, M; Verdier, P; Djaoshvili, N; Roinishvili, N; Roinishvili, V; Amaglobeli, N; Adolphi, R; Anagnostou, G; Brauer, R; Braunschweig, W; Edelhoff, M; Esser, H; Feld, L; Karpinski, W; Khomich, A; Klein, K; Mohr, N; Ostaptchouk, A; Pandoulas, D; Pierschel, G; Raupach, F; Schael, S; Schultz von Dratzig, A; Schwering, G; Sprenger, D; Thomas, M; Weber, M; Wittmer, B; Wlochal, M; Actis, O; Altenhöfer, G; Bender, W; Biallass, P; Erdmann, M; Fetchenhauer, G; Frangenheim, J; Hebbeker, T; Hilgers, G; Hinzmann, A; Hoepfner, K; Hof, C; Kirsch, M; Klimkovich, T; Kreuzer, P; Lanske, D; Merschmeyer, M; Meyer, A; Philipps, B; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Sowa, M; Steggemann, J; Szczesny, H; Teyssier, D; Zeidler, C; Bontenackels, M; Davids, M; Duda, M; Flügge, G; Geenen, H; Giffels, M; Haj Ahmad, W; Hermanns, T; Heydhausen, D; Kalinin, S; Kress, T; Linn, A; Nowack, A; Perchalla, L; Poettgens, M; Pooth, O; Sauerland, P; Stahl, A; Tornier, D; Zoeller, M H; Aldaya Martin, M; Behrens, U; Borras, K; Campbell, A; Castro, E; Dammann, D; Eckerlin, G; Flossdorf, A; Flucke, G; Geiser, A; Hatton, D; Hauk, J; Jung, H; Kasemann, M; Katkov, I; Kleinwort, C; Kluge, H; Knutsson, A; Kuznetsova, E; Lange, W; Lohmann, W; Mankel, R; Marienfeld, M; Meyer, A B; Miglioranzi, S; Mnich, J; Ohlerich, M; Olzem, J; Parenti, A; Rosemann, C; Schmidt, R; Schoerner-Sadenius, T; Volyanskyy, D; Wissing, C; Zeuner, W D; Autermann, C; Bechtel, F; Draeger, J; Eckstein, D; Gebbert, U; Kaschube, K; Kaussen, G; Klanner, R; Mura, B; Naumann-Emme, S; Nowak, F; Pein, U; Sander, C; Schleper, P; Schum, T; Stadie, H; Steinbrück, G; Thomsen, J; Wolf, R; Bauer, J; Blüm, P; Buege, V; Cakir, A; Chwalek, T; De Boer, W; Dierlamm, A; Dirkes, G; Feindt, M; Felzmann, U; Frey, M; Furgeri, A; Gruschke, J; Hackstein, C; Hartmann, F; Heier, S; Heinrich, M; Held, H; Hirschbuehl, D; Hoffmann, K H; Honc, S; Jung, C; Kuhr, T; Liamsuwan, T; Martschei, D; Mueller, S; Müller, Th; Neuland, M B; Niegel, M; Oberst, O; Oehler, A; Ott, J; Peiffer, T; Piparo, D; Quast, G; Rabbertz, K; Ratnikov, F; Ratnikova, N; Renz, M; Saout, C; Sartisohn, G; Scheurer, A; Schieferdecker, P; Schilling, F P; Schott, G; Simonis, H J; Stober, F M; Sturm, P; Troendle, D; Trunov, A; Wagner, W; Wagner-Kuhr, J; Zeise, M; Zhukov, V; Ziebarth, E B; Daskalakis, G; Geralis, T; Karafasoulis, K; Kyriakis, A; Loukas, D; Markou, A; Markou, C; Mavrommatis, C; Petrakou, E; Zachariadou, A; Gouskos, L; Katsas, P; Panagiotou, A; Evangelou, I; Kokkas, P; Manthos, N; Papadopoulos, I; Patras, V; Triantis, F A; Bencze, G; Boldizsar, L; Debreczeni, G; Hajdu, C; Hernath, S; Hidas, P; Horvath, D; Krajczar, K; Laszlo, A; Patay, G; Sikler, F; Toth, N; Vesztergombi, G; Beni, N; Christian, G; Imrek, J; Molnar, J; Novak, D; Palinkas, J; Szekely, G; Szillasi, Z; Tokesi, K; Veszpremi, V; Kapusi, A; Marian, G; Raics, P; Szabo, Z; Trocsanyi, Z L; Ujvari, B; Zilizi, G; Bansal, S; Bawa, H S; Beri, S B; Bhatnagar, V; Jindal, M; Kaur, M; Kaur, R; Kohli, J M; Mehta, M Z; Nishu, N; Saini, L K; Sharma, A; 
Singh, A; Singh, J B; Singh, S P; Ahuja, S; Arora, S; Bhattacharya, S; Chauhan, S; Choudhary, B C; Gupta, P; Jain, S; Jha, M; Kumar, A; Ranjan, K; Shivpuri, R K; Srivastava, A K; Choudhury, R K; Dutta, D; Kailas, S; Kataria, S K; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Guchait, M; Gurtu, A; Maity, M; Majumder, D; Majumder, G; Mazumdar, K; Nayak, A; Saha, A; Sudhakar, K; Banerjee, S; Dugad, S; Mondal, N K; Arfaei, H; Bakhshiansohi, H; Fahim, A; Jafari, A; Mohammadi Najafabadi, M; Moshaii, A; Paktinat Mehdiabadi, S; Rouhani, S; Safarzadeh, B; Zeinali, M; Felcini, M; Abbrescia, M; Barbone, L; Chiumarulo, F; Clemente, A; Colaleo, A; Creanza, D; Cuscela, G; De Filippis, N; De Palma, M; De Robertis, G; Donvito, G; Fedele, F; Fiore, L; Franco, M; Iaselli, G; Lacalamita, N; Loddo, F; Lusito, L; Maggi, G; Maggi, M; Manna, N; Marangelli, B; My, S; Natali, S; Nuzzo, S; Papagni, G; Piccolomo, S; Pierro, G A; Pinto, C; Pompili, A; Pugliese, G; Rajan, R; Ranieri, A; Romano, F; Roselli, G; Selvaggi, G; Shinde, Y; Silvestris, L; Tupputi, S; Zito, G; Abbiendi, G; Bacchi, W; Benvenuti, A C; Boldini, M; Bonacorsi, D; Braibant-Giacomelli, S; Cafaro, V D; Caiazza, S S; Capiluppi, P; Castro, A; Cavallo, F R; Codispoti, G; Cuffiani, M; D'Antone, I; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Giordano, V; Giunta, M; Grandi, C; Guerzoni, M; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Odorici, F; Pellegrini, G; Perrotta, A; Rossi, A M; Rovelli, T; Siroli, G; Torromeo, G; Travaglini, R; Albergo, S; Costa, S; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Broccolo, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Genta, C; Landi, G; Lenzi, P; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bertani, M; Bianco, S; Colafranceschi, S; Colonna, D; Fabbri, F; Giardoni, M; Passamonti, L; Piccolo, D; Pierluigi, D; Ponzio, B; Russo, A; Fabbricatore, P; Musenich, R; Benaglia, A; Calloni, M; Cerati, G B; D'Angelo, P; De Guio, F; Farina, F M; Ghezzi, A; Govoni, P; Malberti, M; Malvezzi, S; Martelli, A; Menasce, D; Miccio, V; Moroni, L; Negri, P; Paganoni, M; Pedrini, D; Pullia, A; Ragazzi, S; Redaelli, N; Sala, S; Salerno, R; Tabarelli de Fatis, T; Tancini, V; Taroni, S; Buontempo, S; Cavallo, N; Cimmino, A; De Gruttola, M; Fabozzi, F; Iorio, A O M; Lista, L; Lomidze, D; Noli, P; Paolucci, P; Sciacca, C; Azzi, P; Bacchetta, N; Barcellan, L; Bellan, P; Bellato, M; Benettoni, M; Biasotto, M; Bisello, D; Borsato, E; Branca, A; Carlin, R; Castellani, L; Checchia, P; Conti, E; Dal Corso, F; De Mattia, M; Dorigo, T; Dosselli, U; Fanzago, F; Gasparini, F; Gasparini, U; Giubilato, P; Gonella, F; Gresele, A; Gulmini, M; Kaminskiy, A; Lacaprara, S; Lazzizzera, I; Margoni, M; Maron, G; Mattiazzo, S; Mazzucato, M; Meneghelli, M; Meneguzzo, A T; Michelotto, M; Montecassiano, F; Nespolo, M; Passaseo, M; Pegoraro, M; Perrozzi, L; Pozzobon, N; Ronchese, P; Simonetto, F; Toniolo, N; Torassa, E; Tosi, M; Triossi, A; Vanini, S; Ventura, S; Zotto, P; Zumerle, G; Baesso, P; Berzano, U; Bricola, S; Necchi, M M; Pagano, D; Ratti, S P; Riccardi, C; Torre, P; Vicini, A; Vitulo, P; Viviani, C; Aisa, D; Aisa, S; Babucci, E; Biasini, M; Bilei, G M; Caponeri, B; Checcucci, B; Dinu, N; Fanò, L; Farnesini, L; Lariccia, P; Lucaroni, A; Mantovani, G; Nappi, A; Piluso, A; Postolache, V; Santocchia, A; Servoli, L; Tonoiu, D; Vedaee, A; Volpe, R; Azzurri, P; Bagliesi, G; Bernardini, J; Berretta, L; Boccali, T; Bocci, A; Borrello, L; Bosi, F; Calzolari, F; Castaldi, R; 
Dell'Orso, R; Fiori, F; Foà, L; Gennai, S; Giassi, A; Kraan, A; Ligabue, F; Lomtadze, T; Mariani, F; Martini, L; Massa, M; Messineo, A; Moggi, A; Palla, F; Palmonari, F; Petragnani, G; Petrucciani, G; Raffaelli, F; Sarkar, S; Segneri, G; Serban, A T; Spagnolo, P; Tenchini, R; Tolaini, S; Tonelli, G; Venturi, A; Verdini, P G; Baccaro, S; Barone, L; Bartoloni, A; Cavallari, F; Dafinei, I; Del Re, D; Di Marco, E; Diemoz, M; Franci, D; Longo, E; Organtini, G; Palma, A; Pandolfi, F; Paramatti, R; Pellegrino, F; Rahatlou, S; Rovelli, C; Alampi, G; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Biino, C; Borgia, M A; Botta, C; Cartiglia, N; Castello, R; Cerminara, G; Costa, M; Dattola, D; Dellacasa, G; Demaria, N; Dughera, G; Dumitrache, F; Graziano, A; Mariotti, C; Marone, M; Maselli, S; Migliore, E; Mila, G; Monaco, V; Musich, M; Nervo, M; Obertino, M M; Oggero, S; Panero, R; Pastrone, N; Pelliccioni, M; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Trapani, P P; Trocino, D; Vilela Pereira, A; Visca, L; Zampieri, A; Ambroglini, F; Belforte, S; Cossutti, F; Della Ricca, G; Gobbo, B; Penzo, A; Chang, S; Chung, J; Kim, D H; Kim, G N; Kong, D J; Park, H; Son, D C; Bahk, S Y; Song, S; Jung, S Y; Hong, B; Kim, H; Kim, J H; Lee, K S; Moon, D H; Park, S K; Rhee, H B; Sim, K S; Kim, J; Choi, M; Hahn, G; Park, I C; Choi, S; Choi, Y; Goh, J; Jeong, H; Kim, T J; Lee, J; Lee, S; Janulis, M; Martisiute, D; Petrov, P; Sabonis, T; Castilla Valdez, H; Sánchez Hernández, A; Carrillo Moreno, S; Morelos Pineda, A; Allfrey, P; Gray, R N C; Krofcheck, D; Bernardino Rodrigues, N; Butler, P H; Signal, T; Williams, J C; Ahmad, M; Ahmed, I; Ahmed, W; Asghar, M I; Awan, M I M; Hoorani, H R; Hussain, I; Khan, W A; Khurshid, T; Muhammad, S; Qazi, S; Shahzad, H; Cwiok, M; Dabrowski, R; Dominik, W; Doroba, K; Konecki, M; Krolikowski, J; Pozniak, K; Romaniuk, Ryszard; Zabolotny, W; Zych, P; Frueboes, T; Gokieli, R; Goscilo, L; Górski, M; Kazana, M; Nawrocki, K; Szleper, M; Wrochna, G; Zalewski, P; Almeida, N; Antunes Pedro, L; Bargassa, P; David, A; Faccioli, P; Ferreira Parracho, P G; Freitas Ferreira, M; Gallinaro, M; Guerra Jordao, M; Martins, P; Mini, G; Musella, P; Pela, J; Raposo, L; Ribeiro, P Q; Sampaio, S; Seixas, J; Silva, J; Silva, P; Soares, D; Sousa, M; Varela, J; Wöhri, H K; Altsybeev, I; Belotelov, I; Bunin, P; Ershov, Y; Filozova, I; Finger, M; Finger, M., Jr.; Golunov, A; Golutvin, I; Gorbounov, N; Kalagin, V; Kamenev, A; Karjavin, V; Konoplyanikov, V; Korenkov, V; Kozlov, G; Kurenkov, A; Lanev, A; Makankin, A; Mitsyn, V V; Moisenz, P; Nikonov, E; Oleynik, D; Palichik, V; Perelygin, V; Petrosyan, A; Semenov, R; Shmatov, S; Smirnov, V; Smolin, D; Tikhonenko, E; Vasil'ev, S; Vishnevskiy, A; Volodko, A; Zarubin, A; Zhiltsov, V; Bondar, N; Chtchipounov, L; Denisov, A; Gavrikov, Y; Gavrilov, G; Golovtsov, V; Ivanov, Y; Kim, V; Kozlov, V; Levchenko, P; Obrant, G; Orishchin, E; Petrunin, A; Shcheglov, Y; Shchetkovskiy, A; Sknar, V; Smirnov, I; Sulimov, V; Tarakanov, V; Uvarov, L; Vavilov, S; Velichko, G; Volkov, S; Vorobyev, A; Andreev, Yu; Anisimov, A; Antipov, P; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Matveev, V; Pashenkov, A; Postoev, V E; Solovey, A; Toropin, A; Troitsky, S; Baud, A; Epshteyn, V; Gavrilov, V; Ilina, N; Kaftanov, V; Kolosov, V; Kossov, M; Krokhotin, A; Kuleshov, S; Oulianov, A; Safronov, G; Semenov, S; Shreyber, I; Stolin, V; Vlasov, E; Zhokin, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Petrushanko, S; 
Sarycheva, L; Savrin, V; Snigirev, A; Vardanyan, I; Dremin, I; Kirakosyan, M; Konovalova, N; Rusakov, S V; Vinogradov, A; Akimenko, S; Artamonov, A; Azhgirey, I; Bitioukov, S; Burtovoy, V; Grishin, V; Kachanov, V; Konstantinov, D; Krychkine, V; Levine, A; Lobov, I; Lukanin, V; Mel'nik, Y; Petrov, V; Ryutin, R; Slabospitsky, S; Sobol, A; Sytine, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Jovanovic, D; Krpic, D; Maletic, D; Puzovic, J; Smiljkovic, N; Aguilar-Benitez, M; Alberdi, J; Alcaraz Maestre, J; Arce, P; Barcala, J M; Battilana, C; Burgos Lazaro, C; Caballero Bejar, J; Calvo, E; Cardenas Montes, M; Cepeda, M; Cerrada, M; Chamizo Llatas, M; Clemente, F; Colino, N; Daniel, M; De La Cruz, B; Delgado Peris, A; Diez Pardos, C; Fernandez Bedoya, C; Fernández Ramos, J P; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Garcia-Bonilla, A C; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Marin, J; Merino, G; Molina, J; Molinero, A; Navarrete, J J; Oller, J C; Puerta Pelayo, J; Romero, L; Santaolalla, J; Villanueva Munoz, C; Willmott, C; Yuste, C; Albajar, C; Blanco Otano, M; de Trocóniz, J F; Garcia Raboso, A; Lopez Berengueres, J O; Cuevas, J; Fernandez Menendez, J; Gonzalez Caballero, I; Lloret Iglesias, L; Naves Sordo, H; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Chuang, S H; Diaz Merino, I; Diez Gonzalez, C; Duarte Campderros, J; Fernandez, M; Gomez, G; Gonzalez Sanchez, J; Gonzalez Suarez, R; Jorda, C; Lobelle Pardo, P; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Martinez Ruiz del Arbol, P; Matorras, F; Rodrigo, T; Ruiz Jimeno, A; Scodellaro, L; Sobron Sanudo, M; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Albert, E; Alidra, M; Ashby, S; Auffray, E; Baechler, J; Baillon, P; Ball, A H; Bally, S L; Barney, D; Beaudette, F; Bellan, R; Benedetti, D; Benelli, G; Bernet, C; Bloch, P; Bolognesi, S; Bona, M; Bos, J; Bourgeois, N; Bourrel, T; Breuker, H; Bunkowski, K; Campi, D; Camporesi, T; Cano, E; Cattai, A; Chatelain, J P; Chauvey, M; Christiansen, T; Coarasa Perez, J A; Conde Garcia, A; Covarelli, R; Curé, B; De Roeck, A; Delachenal, V; Deyrail, D; Di Vincenzo, S; Dos Santos, S; Dupont, T; Edera, L M; Elliott-Peisert, A; Eppard, M; Favre, M; Frank, N; Funk, W; Gaddi, A; Gastal, M; Gateau, M; Gerwig, H; Gigi, D; Gill, K; Giordano, D; Girod, J P; Glege, F; Gomez-Reino Garrido, R; Goudard, R; Gowdy, S; Guida, R; Guiducci, L; Gutleber, J; Hansen, M; Hartl, C; Harvey, J; Hegner, B; Hoffmann, H F; Holzner, A; Honma, A; Huhtinen, M; Innocente, V; Janot, P; Le Godec, G; Lecoq, P; Leonidopoulos, C; Loos, R; Lourenço, C; Lyonnet, A; Macpherson, A; Magini, N; Maillefaud, J D; Maire, G; Mäki, T; Malgeri, L; Mannelli, M; Masetti, L; Meijers, F; Meridiani, P; Mersi, S; Meschi, E; Meynet Cordonnier, A; Moser, R; Mulders, M; Mulon, J; Noy, M; Oh, A; Olesen, G; Onnela, A; Orimoto, T; Orsini, L; Perez, E; Perinic, G; Pernot, J F; Petagna, P; Petiot, P; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Pintus, R; Pirollet, B; Postema, H; Racz, A; Ravat, S; Rew, S B; Rodrigues Antunes, J; Rolandi, G; Rovere, M; Ryjov, V; Sakulin, H; Samyn, D; Sauce, H; Schäfer, C; Schlatter, W D; Schröder, M; Schwick, C; Sciaba, A; Segoni, I; Sharma, A; Siegrist, N; Siegrist, P; Sinanis, N; Sobrier, T; Sphicas, P; Spiga, D; Spiropulu, M; Stöckli, F; Traczyk, P; Tropea, P; Troska, J; Tsirou, A; Veillet, L; Veres, G I; Voutilainen, M; Wertelaers, P; Zanetti, M; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; 
König, S; Kotlinski, D; Langenegger, U; Meier, F; Renker, D; Rohe, T; Sibille, J; Starodumov, A; Betev, B; Caminada, L; Chen, Z; Cittolin, S; Da Silva Di Calafiori, D R; Dambach, S; Dissertori, G; Dittmar, M; Eggel, C; Eugster, J; Faber, G; Freudenreich, K; Grab, C; Hervé, A; Hintz, W; Lecomte, P; Luckey, P D; Lustermann, W; Marchica, C; Milenovic, P; Moortgat, F; Nardulli, A; Nessi-Tedaldi, F; Pape, L; Pauss, F; Punz, T; Rizzi, A; Ronga, F J; Sala, L; Sanchez, A K; Sawley, M C; Sordini, V; Stieger, B; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Trüb, P; Weber, M; Wehrli, L; Weng, J; Zelepoukine, S; Amsler, C; Chiochia, V; De Visscher, S; Regenfus, C; Robmann, P; Rommerskirchen, T; Schmidt, A; Tsirigkas, D; Wilke, L; Chang, Y H; Chen, E A; Chen, W T; Go, A; Kuo, C M; Li, S W; Lin, W; Bartalini, P; Chang, P; Chao, Y; Chen, K F; Hou, W S; Hsiung, Y; Lei, Y J; Lin, S W; Lu, R S; Schümann, J; Shiu, J G; Tzeng, Y M; Ueno, K; Velikzhanin, Y; Wang, C C; Wang, M; Adiguzel, A; Ayhan, A; Azman Gokce, A; Bakirci, M N; Cerci, S; Dumanoglu, I; Eskut, E; Girgis, S; Gurpinar, E; Hos, I; Karaman, T; Kayis Topaksu, A; Kurt, P; Önengüt, G; Önengüt Gökbulut, G; Ozdemir, K; Ozturk, S; Polatöz, A; Sogut, K; Tali, B; Topakli, H; Uzun, D; Vergili, L N; Vergili, M; Akin, I V; Aliev, T; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Öcalan, K; Serin, M; Sever, R; Surat, U E; Zeyrek, M; Deliomeroglu, M; Demir, D; Gülmez, E; Halu, A; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Levchuk, L; Lukyanenko, S; Soroka, D; Zub, S; Bostock, F; Brooke, J J; Cheng, T L; Cussans, D; Frazier, R; Goldstein, J; Grant, N; Hansen, M; Heath, G P; Heath, H F; Hill, C; Huckvale, B; Jackson, J; Mackay, C K; Metson, S; Newbold, D M; Nirunpong, K; Smith, V J; Velthuis, J; Walton, R; Bell, K W; Brew, C; Brown, R M; Camanzi, B; Cockerill, D J A; Coughlan, J A; Geddes, N I; Harder, K; Harper, S; Kennedy, B W; Murray, P; Shepherd-Themistocleous, C H; Tomalin, I R; Williams, J H; Womersley, W J; Worm, S D; Bainbridge, R; Ball, G; Ballin, J; Beuselinck, R; Buchmuller, O; Colling, D; Cripps, N; Davies, G; Della Negra, M; Foudas, C; Fulcher, J; Futyan, D; Hall, G; Hays, J; Iles, G; Karapostoli, G; MacEvoy, B C; Magnan, A M; Marrouche, J; Nash, J; Nikitenko, A; Papageorgiou, A; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rompotis, N; Rose, A; Ryan, M J; Seez, C; Sharp, P; Sidiropoulos, G; Stettler, M; Stoye, M; Takahashi, M; Tapper, A; Timlin, C; Tourneur, S; Vazquez Acosta, M; Virdee, T; Wakefield, S; Wardrope, D; Whyntie, T; Wingham, M; Cole, J E; Goitom, I; Hobson, P R; Khan, A; Kyberd, P; Leslie, D; Munro, C; Reid, I D; Siamitros, C; Taylor, R; Teodorescu, L; Yaselli, I; Bose, T; Carleton, M; Hazen, E; Heering, A H; Heister, A; John, J St; Lawson, P; Lazic, D; Osborne, D; Rohlf, J; Sulak, L; Wu, S; Andrea, J; Avetisyan, A; Bhattacharya, S; Chou, J P; Cutts, D; Esen, S; Kukartsev, G; Landsberg, G; Narain, M; Nguyen, D; Speer, T; Tsang, K V; Breedon, R; Calderon De La Barca Sanchez, M; Case, M; Cebra, D; Chertok, M; Conway, J; Cox, P T; Dolen, J; Erbacher, R; Friis, E; Ko, W; Kopecky, A; Lander, R; Lister, A; Liu, H; Maruyama, S; Miceli, T; Nikolic, M; Pellett, D; Robles, J; Searle, M; Smith, J; Squires, M; Stilley, J; Tripathi, M; Vasquez Sierra, R; Veelken, C; Andreev, V; Arisaka, K; Cline, D; Cousins, R; Erhan, S; Hauser, J; Ignatenko, M; Jarvis, C; Mumford, J; Plager, C; Rakness, G; Schlein, P; Tucker, J; Valuev, V; Wallny, R; Yang, X; Babb, J; Bose, M; Chandra, A; Clare, R; Ellison, J A; Gary, J W; Hanson, G; Jeng, 
G Y; Kao, S C; Liu, F; Liu, H; Luthra, A; Nguyen, H; Pasztor, G; Satpathy, A; Shen, B C; Stringer, R; Sturdy, J; Sytnik, V; Wilken, R; Wimpenny, S; Branson, J G; Dusinberre, E; Evans, D; Golf, F; Kelley, R; Lebourgeois, M; Letts, J; Lipeles, E; Mangano, B; Muelmenstaedt, J; Norman, M; Padhi, S; Petrucci, A; Pi, H; Pieri, M; Ranieri, R; Sani, M; Sharma, V; Simon, S; Würthwein, F; Yagil, A; Campagnari, C; D'Alfonso, M; Danielson, T; Garberson, J; Incandela, J; Justus, C; Kalavase, P; Koay, S A; Kovalskyi, D; Krutelyov, V; Lamb, J; Lowette, S; Pavlunin, V; Rebassoo, F; Ribnik, J; Richman, J; Rossin, R; Stuart, D; To, W; Vlimant, J R; Witherell, M; Apresyan, A; Bornheim, A; Bunn, J; Chiorboli, M; Gataullin, M; Kcira, D; Litvine, V; Ma, Y; Newman, H B; Rogan, C; Timciuc, V; Veverka, J; Wilkinson, R; Yang, Y; Zhang, L; Zhu, K; Zhu, R Y; Akgun, B; Carroll, R; Ferguson, T; Jang, D W; Jun, S Y; Paulini, M; Russ, J; Terentyev, N; Vogel, H; Vorobiev, I; Cumalat, J P; Dinardo, M E; Drell, B R; Ford, W T; Heyburn, B; Luiggi Lopez, E; Nauenberg, U; Stenson, K; Ulmer, K; Wagner, S R; Zang, S L; Agostino, L; Alexander, J; Blekman, F; Cassel, D; Chatterjee, A; Das, S; Gibbons, L K; Heltsley, B; Hopkins, W; Khukhunaishvili, A; Kreis, B; Kuznetsov, V; Patterson, J R; Puigh, D; Ryd, A; Shi, X; Stroiney, S; Sun, W; Teo, W D; Thom, J; Vaughan, J; Weng, Y; Wittich, P; Beetz, C P; Cirino, G; Sanzeni, C; Winn, D; Abdullin, S; Afaq, M A; Albrow, M; Ananthan, B; Apollinari, G; Atac, M; Badgett, W; Bagby, L; Bakken, J A; Baldin, B; Banerjee, S; Banicz, K; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Biery, K; Binkley, M; Bloch, I; Borcherding, F; Brett, A M; Burkett, K; Butler, J N; Chetluru, V; Cheung, H W K; Chlebana, F; Churin, I; Cihangir, S; Crawford, M; Dagenhart, W; Demarteau, M; Derylo, G; Dykstra, D; Eartly, D P; Elias, J E; Elvira, V D; Evans, D; Feng, L; Fischler, M; Fisk, I; Foulkes, S; Freeman, J; Gartung, P; Gottschalk, E; Grassi, T; Green, D; Guo, Y; Gutsche, O; Hahn, A; Hanlon, J; Harris, R M; Holzman, B; Howell, J; Hufnagel, D; James, E; Jensen, H; Johnson, M; Jones, C D; Joshi, U; Juska, E; Kaiser, J; Klima, B; Kossiakov, S; Kousouris, K; Kwan, S; Lei, C M; Limon, P; Lopez Perez, J A; Los, S; Lueking, L; Lukhanin, G; Lusin, S; Lykken, J; Maeshima, K; Marraffino, J M; Mason, D; McBride, P; Miao, T; Mishra, K; Moccia, S; Mommsen, R; Mrenna, S; Muhammad, A S; Newman-Holmes, C; Noeding, C; O'Dell, V; Prokofyev, O; Rivera, R; Rivetta, C H; Ronzhin, A; Rossman, P; Ryu, S; Sekhri, V; Sexton-Kennedy, E; Sfiligoi, I; Sharma, S; Shaw, T M; Shpakov, D; Skup, E; Smith, R P; Soha, A; Spalding, W J; Spiegel, L; Suzuki, I; Tan, P; Tanenbaum, W; Tkaczyk, S; Trentadue, R; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wicklund, E; Wu, W; Yarba, J; Yumiceva, F; Yun, J C; Acosta, D; Avery, P; Barashko, V; Bourilkov, D; Chen, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fu, Y; Furic, I K; Gartner, J; Holmes, D; Kim, B; Klimenko, S; Konigsberg, J; Korytov, A; Kotov, K; Kropivnitskaya, A; Kypreos, T; Madorsky, A; Matchev, K; Mitselmakher, G; Pakhotin, Y; Piedra Gomez, J; Prescott, C; Rapsevicius, V; Remington, R; Schmitt, M; Scurlock, B; Wang, D; Yelton, J; Ceron, C; Gaultney, V; Kramer, L; Lebolo, L M; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, T; Askew, A; Baer, H; Bertoldi, M; Chen, J; Dharmaratna, W G D; Gleyzer, S V; Haas, J; Hagopian, S; Hagopian, V; Jenkins, M; Johnson, K F; Prettner, E; Prosper, H; Sekmen, S; Baarmand, M M; Guragain, S; Hohlmann, M; Kalakhety, H; 
Mermerkaya, H; Ralich, R; Vodopiyanov, I; Abelev, B; Adams, M R; Anghel, I M; Apanasevich, L; Bazterra, V E; Betts, R R; Callner, J; Castro, M A; Cavanaugh, R; Dragoiu, C; Garcia-Solis, E J; Gerber, C E; Hofman, D J; Khalatian, S; Mironov, C; Shabalina, E; Smoron, A; Varelas, N; Akgun, U; Albayrak, E A; Ayan, A S; Bilki, B; Briggs, R; Cankocak, K; Chung, K; Clarida, W; Debbins, P; Duru, F; Ingram, F D; Lae, C K; McCliment, E; Merlo, J P; Mestvirishvili, A; Miller, M J; Moeller, A; Nachtman, J; Newsom, C R; Norbeck, E; Olson, J; Onel, Y; Ozok, F; Parsons, J; Schmidt, I; Sen, S; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bonato, A; Chien, C Y; Fehling, D; Giurgiu, G; Gritsan, A V; Guo, Z J; Maksimovic, P; Rappoccio, S; Swartz, M; Tran, N V; Zhang, Y; Baringer, P; Bean, A; Grachov, O; Murray, M; Radicci, V; Sanders, S; Wood, J S; Zhukova, V; Bandurin, D; Bolton, T; Kaadze, K; Liu, A; Maravin, Y; Onoprienko, D; Svintradze, I; Wan, Z; Gronberg, J; Hollar, J; Lange, D; Wright, D; Baden, D; Bard, R; Boutemeur, M; Eno, S C; Ferencek, D; Hadley, N J; Kellogg, R G; Kirn, M; Kunori, S; Rossato, K; Rumerio, P; Santanastasio, F; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Toole, T; Twedt, E; Alver, B; Bauer, G; Bendavid, J; Busza, W; Butz, E; Cali, I A; Chan, M; D'Enterria, D; Everaerts, P; Gomez Ceballos, G; Hahn, K A; Harris, P; Jaditz, S; Kim, Y; Klute, M; Lee, Y J; Li, W; Loizides, C; Ma, T; Miller, M; Nahn, S; Paus, C; Roland, C; Roland, G; Rudolph, M; Stephans, G; Sumorok, K; Sung, K; Vaurynovich, S; Wenger, E A; Wyslouch, B; Xie, S; Yilmaz, Y; Yoon, A S; Bailleux, D; Cooper, S I; Cushman, P; Dahmes, B; De Benedetti, A; Dolgopolov, A; Dudero, P R; Egeland, R; Franzoni, G; Haupt, J; Inyakin, A; Klapoetke, K; Kubota, Y; Mans, J; Mirman, N; Petyt, D; Rekovic, V; Rusack, R; Schroeder, M; Singovsky, A; Zhang, J; Cremaldi, L M; Godang, R; Kroeger, R; Perera, L; Rahmat, R; Sanders, D A; Sonnek, P; Summers, D; Bloom, K; Bockelman, B; Bose, S; Butt, J; Claes, D R; Dominguez, A; Eads, M; Keller, J; Kelly, T; Kravchenko, I; Lazo-Flores, J; Lundstedt, C; Malbouisson, H; Malik, S; Snow, G R; Baur, U; Iashvili, I; Kharchilava, A; Kumar, A; Smith, K; Strang, M; Alverson, G; Barberis, E; Boeriu, O; Eulisse, G; Govi, G; McCauley, T; Musienko, Y; Muzaffar, S; Osborne, I; Paul, T; Reucroft, S; Swain, J; Taylor, L; Tuura, L; Anastassov, A; Gobbi, B; Kubik, A; Ofierzynski, R A; Pozdnyakov, A; Schmitt, M; Stoynev, S; Velasco, M; Won, S; Antonelli, L; Berry, D; Hildreth, M; Jessop, C; Karmgard, D J; Kolberg, T; Lannon, K; Lynch, S; Marinelli, N; Morse, D M; Ruchti, R; Slaunwhite, J; Warchol, J; Wayne, M; Bylsma, B; Durkin, L S; Gilmore, J; Gu, J; Killewald, P; Ling, T Y; Williams, G; Adam, N; Berry, E; Elmer, P; Garmash, A; Gerbaudo, D; Halyo, V; Hunt, A; Jones, J; Laird, E; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Stickland, D; Tully, C; Werner, J S; Wildish, T; Xie, Z; Zuranski, A; Acosta, J G; Bonnett Del Alamo, M; Huang, X T; Lopez, A; Mendez, H; Oliveros, S; Ramirez Vargas, J E; Santacruz, N; Zatzerklyany, A; Alagoz, E; Antillon, E; Barnes, V E; Bolla, G; Bortoletto, D; Everett, A; Garfinkel, A F; Gecse, Z; Gutay, L; Ippolito, N; Jones, M; Koybasi, O; Laasanen, A T; Leonardo, N; Liu, C; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Sedov, A; Shipsey, I; Yoo, H D; Zheng, Y; Jindal, P; Parashar, N; Cuplov, V; Ecklund, K M; Geurts, F J M; Liu, J H; Maronde, D; Matveev, M; Padley, B P; Redjimi, R; Roberts, J; Sabbatini, L; Tumanov, A; Betchart, B; Bodek, A; Budd, H; Chung, Y S; 
de Barbaro, P; Demina, R; Flacher, H; Gotra, Y; Harel, A; Korjenevski, S; Miner, D C; Orbaker, D; Petrillo, G; Vishnevskiy, D; Zielinski, M; Bhatti, A; Demortier, L; Goulianos, K; Hatakeyama, K; Lungu, G; Mesropian, C; Yan, M; Atramentov, O; Bartz, E; Gershtein, Y; Halkiadakis, E; Hits, D; Lath, A; Rose, K; Schnetzer, S; Somalwar, S; Stone, R; Thomas, S; Watts, T L; Cerizza, G; Hollingsworth, M; Spanier, S; Yang, Z C; York, A; Asaadi, J; Aurisano, A; Eusebi, R; Golyash, A; Gurrola, A; Kamon, T; Nguyen, C N; Pivarski, J; Safonov, A; Sengupta, S; Toback, D; Weinberger, M; Akchurin, N; Berntzon, L; Gumus, K; Jeong, C; Kim, H; Lee, S W; Popescu, S; Roh, Y; Sill, A; Volobouev, I; Washington, E; Wigmans, R; Yazgan, E; Engh, D; Florez, C; Johns, W; Pathak, S; Sheldon, P; Andelin, D; Arenton, M W; Balazs, M; Boutle, S; Buehler, M; Conetti, S; Cox, B; Hirosky, R; Ledovskoy, A; Neu, C; Phillips II, D; Ronquest, M; Yohay, R; Gollapinni, S; Gunthoti, K; Harr, R; Karchin, P E; Mattson, M; Sakharov, A; Anderson, M; Bachtis, M; Bellinger, J N; Carlsmith, D; Crotty, I; Dasu, S; Dutta, S; Efron, J; Feyzi, F; Flood, K; Gray, L; Grogg, K S; Grothe, M; Hall-Wilton, R; Jaworski, M; Klabbers, P; Klukas, J; Lanaro, A; Lazaridis, C; Leonard, J; Loveless, R; Magrans de Abril, M; Mohapatra, A; Ott, G; Polese, G; Reeder, D; Savin, A; Smith, W H; Sourkov, A; Swanson, J; Weinberg, M; Wenman, D; Wensveen, M; White, A

    2010-01-01

    The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data.

  2. CMS Data Processing Workflows during an Extended Cosmic Ray Run

    Energy Technology Data Exchange (ETDEWEB)

    2009-11-01

    The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data.

  3. Coping with distributed computing

    International Nuclear Information System (INIS)

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  4. Large Scale Commissioning and Operational Experience with Tier-2 to Tier-2 Data Transfer Links in CMS

    CERN Document Server

    Letts, James

    2010-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model...

  5. CMS Data Analysis School Model

    CERN Document Server

    Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center) at Fermilab and is based on earlier workshops held at the LPC and at the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to get started quickly and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is engaging the discovery potential of the collaboration and maximizing its physics output. As a broader goal, CMS is striving to nurture and increase the engagement of the myriad talents of CMS in the development of physics, service, upgrades, education of those new to CMS and the caree...

  6. Health and performance monitoring of the online computer cluster of CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2012-01-01

    The CMS experiment at the LHC features over 2,500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
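
    The migration retains the standard Nagios/Icinga plugin interface, in which a check is any executable that prints a one-line status message and reports its result through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The abstract does not show a plugin, so the following is a minimal illustrative sketch of such a check in Python; the load-average metric and the thresholds are arbitrary examples, not values taken from the CMS online cluster.

      #!/usr/bin/env python3
      """Minimal Nagios/Icinga-compatible check plugin (illustrative sketch).
      Checks the 1-minute load average against warning/critical thresholds and
      follows the standard plugin exit-code convention."""
      import os
      import sys

      OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

      def check_load(warn=4.0, crit=8.0):
          try:
              load1, _, _ = os.getloadavg()
          except OSError:
              print("UNKNOWN - could not read load average")
              return UNKNOWN
          perfdata = f"|load1={load1:.2f};{warn};{crit}"   # optional performance data
          if load1 >= crit:
              print(f"CRITICAL - load average {load1:.2f} {perfdata}")
              return CRITICAL
          if load1 >= warn:
              print(f"WARNING - load average {load1:.2f} {perfdata}")
              return WARNING
          print(f"OK - load average {load1:.2f} {perfdata}")
          return OK

      if __name__ == "__main__":
          sys.exit(check_load())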

  7. QCD measurements with the CMS detector

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In the first year of LHC data taking, CMS pursued a rich program of QCD physics. On the low-pT front, results on momentum, pseudorapidity and multiplicity distributions of charged and strange hadrons, underlying-event observables, two-particle rapidity correlations and Bose-Einstein correlations are presented. On the high-pT front, jet and photon cross-section measurements are reported for inclusive and di-object production, as well as ratios of 3/2 jet cross sections. Finally, QCD multi-jet dynamics is explored with event-shape variables, dijet azimuthal decorrelations and dijet angular distributions.

  8. CMS Comic Book Brochure

    CERN Document Server

    2006-01-01

    The aim is to raise students' awareness of what the CMS detector is, how it was constructed and what it hopes to find. Titled "CMS Particle Hunter", this comic-book-style brochure explains, in colorful cartoon form, to young budding scientists and science enthusiasts how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool.

  9. Performance of the CMS Event Builder

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej Szymon; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr

    2017-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz. It transports event data at an aggregate throughput of ~100 GB/s to the high-level trigger (HLT) farm. The CMS DAQ system has been completely rebuilt during the first long shutdown of the LHC in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gb/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR CLOS network has been chosen for the event builder. We report on the performance of the event builder system and the steps taken to exploit the full potential of the network technologies.
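
    As a rough consistency check (not taken from the abstract itself), the quoted figures imply an average event size of about 1 MB, and one can estimate how many 56 Gb/s FDR links are needed to carry the aggregate event-building traffic. The short Python sketch below only does that arithmetic; the 50% usable-bandwidth factor is an assumed illustration, not a measured value.

      # Back-of-the-envelope event-builder throughput, using the figures quoted
      # in the abstract (100 kHz, ~100 GB/s) plus an assumed link efficiency.
      L1_RATE_HZ = 100e3            # level-1 accept rate
      THROUGHPUT_B_PER_S = 100e9    # aggregate event-builder throughput (~100 GB/s)
      FDR_LINK_B_PER_S = 56e9 / 8   # one 56 Gb/s InfiniBand FDR link, in bytes/s
      LINK_EFFICIENCY = 0.5         # assumed usable fraction of the raw bandwidth

      event_size = THROUGHPUT_B_PER_S / L1_RATE_HZ
      links_needed = THROUGHPUT_B_PER_S / (FDR_LINK_B_PER_S * LINK_EFFICIENCY)

      print(f"average event size: {event_size / 1e6:.1f} MB")   # ~1.0 MB
      print(f"FDR links needed  : {links_needed:.0f}")          # ~29 at 50% efficiency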

  10. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover the principles and patterns hidden inside ever-growing Big Data. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges posed by the increasing amount of data from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013), while CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU is a compelling alternative with outstanding parallel processing capability, cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be combined into a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  11. Efficient Monitoring of CRAB Jobs at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Silva, J. M.D. [Sao Paulo, IFT; Balcas, J. [Caltech; Belforte, S. [INFN, Trieste; Ciangottini, D. [INFN, Perugia; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Ivanov, T. T. [Sofiya U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  12. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations
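
    The abstract singles out long-running Monte Carlo calculations as a natural fit for distributed parallel processing, since independent batches of histories can run on separate machines and be combined afterwards. The sketch below illustrates that batch-parallel pattern in Python with a process pool on a single host; the toy integrand, seeds and batch sizes are arbitrary stand-ins, not anything from the ANL reactor codes.

      # Illustrative batch-parallel Monte Carlo: independent batches are computed
      # in worker processes and combined at the end (mean and statistical error).
      import math
      import random
      from multiprocessing import Pool

      def mc_batch(args):
          """Estimate the integral of sin(x) on [0, pi] from n random samples."""
          seed, n = args
          rng = random.Random(seed)
          total = sum(math.sin(rng.uniform(0.0, math.pi)) for _ in range(n))
          return math.pi * total / n        # one batch estimate (exact value is 2)

      if __name__ == "__main__":
          batches = [(seed, 100_000) for seed in range(16)]   # independent seeds
          with Pool() as pool:
              estimates = pool.map(mc_batch, batches)
          mean = sum(estimates) / len(estimates)
          var = sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1)
          print(f"integral = {mean:.4f} +/- {math.sqrt(var / len(estimates)):.4f}")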

  13. Distributed storage and cloud computing: a test case

    International Nuclear Information System (INIS)

    Piano, S; Della Ricca, G

    2014-01-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that the requirements of the different computational communities are normally not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants, taking full advantage of GARR-X wide area networks (10 GB/s), and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  14. A TCP/IP transport layer for the DAQ of the CMS experiment

    International Nuclear Information System (INIS)

    Kozlovszky, M.

    2004-01-01

    The CMS collaboration is currently investigating various networking technologies that may meet the requirements of the CMS Data Acquisition System (DAQ). During this study, a peer transport component based on TCP/IP has been developed using object-oriented techniques for the distributed DAQ framework named XDAQ. This framework has been designed to facilitate the development of distributed data acquisition systems within the CMS experiment. The peer transport component had to meet three main requirements. Firstly, it had to provide fair access to the communication medium for competing applications. Secondly, it had to provide as much of the available bandwidth to the application layer as possible. Finally, it had to hide the complexity of using non-blocking TCP/IP connections from the application layer. This paper describes the development of the peer transport component and then presents and draws conclusions on the measurements made during tests. The major topics investigated include: blocking versus non-blocking communication, TCP/IP configuration options, and multi-rail connections.
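
    One of the stated requirements is hiding the complexity of non-blocking TCP/IP from the application layer. The abstract contains no code, so the following is a small illustrative Python sketch (not the XDAQ implementation) of the usual pattern: a selector-driven loop with a per-connection send queue, behind which the application simply calls send() and never blocks.

      # Illustrative non-blocking TCP sender: the application enqueues messages
      # and the event loop drains the queue as the socket becomes writable.
      import selectors
      import socket
      from collections import deque

      class PeerConnection:
          def __init__(self, host, port):
              self.sel = selectors.DefaultSelector()
              self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              self.sock.setblocking(False)
              self.sock.connect_ex((host, port))       # non-blocking connect
              self.outbox = deque()                    # pending outgoing buffers
              self.sel.register(self.sock, selectors.EVENT_WRITE)

          def send(self, payload):
              """Application-level send: just enqueue, never blocks."""
              self.outbox.append(payload)

          def pump(self, timeout=0.1):
              """One event-loop iteration: write whatever the socket accepts."""
              for _key, events in self.sel.select(timeout):
                  if events & selectors.EVENT_WRITE and self.outbox:
                      buf = self.outbox[0]
                      sent = self.sock.send(buf)       # may send only part of buf
                      if sent == len(buf):
                          self.outbox.popleft()
                      else:
                          self.outbox[0] = buf[sent:]  # keep the unsent remainder

      if __name__ == "__main__":
          peer = PeerConnection("127.0.0.1", 9999)     # assumes a listener exists
          peer.send(b"hello from the peer transport sketch\n")
          while peer.outbox:
              peer.pump()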

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  16. Wide Area Network Access to CMS Data Using the Lustre Filesystem

    CERN Document Server

    Rodríguez, J L; Prescott, C; Wu, Y; Kim, B; Fu, Y; Bourilkov, D; Avery, P

    2009-01-01

    In this paper, we explore the use of the Lustre cluster filesystem over the wide area network to access Compact Muon Solenoid (CMS) data stored on physical devices located hundreds of kilometres away. We describe the experimental testbed and report on the I/O performance of applications writing and reading data on the distributed Lustre filesystem established across the WAN. We compare the I/O performance of a CMS application to the performance obtained with IOzone, a standard benchmark tool. We then examine the I/O performance of the CMS application running multiple processes on a single server, and compare the Lustre results to results obtained on data stored on local filesystems. Our measurements reveal that the IOzone benchmark tool, accessing data sequentially, can saturate the Gbps network link that connects our Lustre client in Miami, Florida to the Lustre storage located in Gainesville, Florida. We also find that the I/O rates of the CMS application are significantly less than what can be obtained with ...

  17. Support system for ATLAS distributed computing operations

    CERN Document Server

    Kishimoto, Tomoe; The ATLAS collaboration

    2018-01-01

    The ATLAS distributed computing system has allowed the experiment to successfully meet the challenges of LHC Run 2. In order for distributed computing to operate smoothly and efficiently, several support teams are organized in the ATLAS experiment. The ADCoS (ATLAS Distributed Computing Operation Shifts) is a dedicated group of shifters who follow and report failing jobs, failing data transfers between sites, degradation of ATLAS central computing services, and more. The DAST (Distributed Analysis Support Team) provides user support to resolve issues related to running distributed analysis on the grid. The CRC (Computing Run Coordinator) maintains a global view of the day-to-day operations. In this presentation, the status and operational experience of the support system for ATLAS distributed computing in LHC Run 2 will be reported. This report also includes operations experience from the grid site point of view, and an analysis of the errors that create the biggest waste of wallclock time. The report of oper...

  18. Evacuation drill at CMS

    CERN Multimedia

    Niels Dupont-Sagorin and Christoph Schaefer

    2012-01-01

    Training personnel, including evacuation guides and shifters, checking procedures, improving collaboration with the CERN Fire Brigade: the first real-life evacuation drill at CMS took place on Friday 3 February from 12 p.m. to 3 p.m. in the two caverns located at Point 5 of the LHC. CERN personnel during the evacuation drill at CMS. Evacuation drills are required by law and have to be organized periodically in all areas of CERN, both above and below ground. The last drill at CMS, which took place in June 2007, revealed some desiderata, most notably the need for a public address system. With this equipment in place, it is now possible to broadcast audio messages from the CMS control room to the underground areas. The CMS Technical Coordination Team and the GLIMOS have focused particularly on preparing collaborators for emergency situations by providing training and organizing regular safety drills with the HSE Unit and the CERN Fire Brigade. This Friday, the practical traini...

  19. CMS readiness for multi-core workload scheduling

    Science.gov (United States)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
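
    The central scheduling idea is the partitionable pilot: a multi-core pilot slot that can be dynamically split into sub-slots so that single-core and multi-core jobs share the same resource. The toy Python sketch below illustrates only that carving-up logic, as a greedy first-fit on requested cores; it is a deliberately simplified assumption-based model, not the HTCondor/GlideinWMS mechanism used by CMS.

      # Toy model of a partitionable pilot slot: jobs requesting different numbers
      # of cores are packed greedily into the pilot's remaining capacity.
      class PartitionablePilot:
          def __init__(self, total_cores):
              self.free_cores = total_cores
              self.running = []

          def try_start(self, job_name, cores):
              """Carve a dynamic sub-slot out of the pilot if enough cores remain."""
              if cores <= self.free_cores:
                  self.free_cores -= cores
                  self.running.append((job_name, cores))
                  return True
              return False

      if __name__ == "__main__":
          pilot = PartitionablePilot(total_cores=8)
          queue = [("reco-multicore", 4), ("analysis-1", 1), ("analysis-2", 1),
                   ("sim-multicore", 4), ("analysis-3", 1)]
          for name, cores in queue:
              status = "started" if pilot.try_start(name, cores) else "left pending"
              print(f"{name:15s} ({cores} cores): {status}")
          print(f"free cores left: {pilot.free_cores}")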

  20. CMS Readiness for Multi-Core Workload Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Balcas, J. [Caltech; Hernandez, J. [Madrid, CIEMAT; Aftab Khan, F. [NCP, Islamabad; Letts, J. [UC, San Diego; Mason, D. [Fermilab; Verguilov, V. [CLMI, Sofia

    2017-11-22

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  1. CMS Space Monitoring

    Science.gov (United States)

    Ratnikova, N.; Huang, C.-H.; Sanchez-Hernandez, A.; Wildish, T.; Zhang, X.

    2014-06-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups, and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web-based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.
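
    The core of the system is turning flat storage metadata dumps (one record per file) into per-directory space usage that can be uploaded to a central database and later queried per site and time interval. The sketch below shows only that aggregation step, for a hypothetical dump format of "path size" lines; the format, the example paths and the directory-depth cut-off are illustrative assumptions, not the actual PhEDEx-based implementation.

      # Aggregate a flat storage dump ("<path> <size-in-bytes>" per line) into
      # per-directory totals, truncated at a fixed depth to keep records compact.
      import posixpath
      from collections import defaultdict

      def aggregate_dump(lines, depth=3):
          totals = defaultdict(int)
          for line in lines:
              if not line.strip():
                  continue
              path, size = line.strip().rsplit(None, 1)
              parts = posixpath.dirname(path).strip("/").split("/")
              node = "/" + "/".join(parts[:depth])    # truncate to the chosen depth
              totals[node] += int(size)
          return dict(totals)

      if __name__ == "__main__":
          dump = [
              "/store/data/Run2012A/MinBias/RAW/file1.root 2147483648",
              "/store/data/Run2012A/MinBias/RAW/file2.root 1073741824",
              "/store/mc/Summer12/DYJets/AODSIM/file3.root 536870912",
          ]
          for directory, size in sorted(aggregate_dump(dump).items()):
              print(f"{directory:30s} {size / 1e9:6.2f} GB")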

  2. CMS Space Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Huang, C.-H. [Fermilab; Sanchez-Hernandez, A. [CINVESTAV, IPN; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2014-01-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups, and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web-based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.

  3. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    International Nuclear Information System (INIS)

    Letts, J; Magini, N

    2011-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfers links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.

  4. Monitoring light source for CMS lead tungstate crystal calorimeter at LHC

    CERN Document Server

    Zhang Li Yuan; Zhu Ren Yuan; Liu Dun Can

    2000-01-01

    Light monitoring will serve as an intercalibration for CMS lead tungstate crystals in situ at the LHC, which is crucial for maintaining the crystal calorimeter's sub-percent constant term in the energy resolution. This paper presents the design of the CMS ECAL monitoring light source and high-level distribution system. The correlations between variations of the light output and the transmittance for the CMS choice of yttrium-doped PbWO4 crystals were investigated, and were used to study monitoring linearity and sensitivity as a function of the wavelength. The monitoring wavelength was determined so that good linearity as well as adequate sensitivity can be achieved. The performance of a custom-manufactured tunable laser system is presented. Issues related to monitoring precision are discussed. (29 refs.)

  5. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  6. Distributed computing for global health

    CERN Multimedia

    CERN. Geneva; Schwede, Torsten; Moore, Celia; Smith, Thomas E; Williams, Brian; Grey, François

    2005-01-01

    Distributed computing harnesses the power of thousands of computers within organisations or over the Internet. In order to tackle global health problems, several groups of researchers have begun to use this approach to exceed by far the computing power of a single lab. This event illustrates how companies, research institutes and the general public are contributing their computing power to these efforts, and what impact this may have on a range of world health issues. Grids for neglected diseases Vincent Breton, CNRS/EGEE This talk introduces the topic of distributed computing, explaining the similarities and differences between Grid computing, volunteer computing and supercomputing, and outlines the potential of Grid computing for tackling neglected diseases where there is little economic incentive for private R&D efforts. Recent results on malaria drug design using the Grid infrastructure of the EU-funded EGEE project, which is coordinated by CERN and involves 70 partners in Europe, the US and Russi...

  7. International Masterclass at CMS

    CERN Multimedia

    Lapka, M

    2012-01-01

    The CMS collaboration welcomed a class of French high school students to the CERN facility in Meyrin, Switzerland on 12 March 2012. Students spent the day meeting with physicists, hearing talks, asking questions, and participating in a hands-on exercise using real data collected by the CMS experiment at the Large Hadron Collider. Talks and other resources are available here: http://ippog-dev.web.cern.ch/resources/2012/ippog-international-masterclass-2012-cms

  8. A data Grid prototype for distributed data production in CMS

    International Nuclear Information System (INIS)

    Hafeez, Mehnaz; Samar, Asad; Stockinger, Heinz

    2001-01-01

    The CMS experiment at CERN is setting up a Grid infrastructure required to fulfill the needs imposed by Terabyte scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimize performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middle-ware of choice is the Globus toolkit that provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronization of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site specific policies. The data management granularity is the file rather than the object level. The first prototype is currently in use for the High Level Trigger (HLT) production (autumn 2000). Owing to these efforts, CMS is one of the pioneers to use the Data Grid functionality in a running production system. The project can be viewed as an evaluator of different strategies, a test for the capabilities of middle-ware tools and a provider of basic Grid functionalities

  9. CMS Industries awarded gold, crystal

    CERN Multimedia

    2006-01-01

    The CMS collaboration honoured 10 of its top suppliers in the seventh annual awards ceremony. The representatives of the firms that received the CMS Gold and Crystal Awards stand with their awards after the ceremony. The seventh annual CMS Awards ceremony was held on Monday 13 March to recognize the industries that have made substantial contributions to the construction of the collaboration's detector. Nine international firms received Gold Awards, and General Tecnica of Italy received the prestigious Crystal Award. Representatives from the companies attended the ceremony during the plenary session of CMS week. 'The role of CERN, its machines and experiments, beyond particle physics is to push the development of equipment technologies related to high-energy physics,' said CMS Awards Coordinator Domenico Campi. 'All of these industries must go beyond the technologies that are currently available.' Without the involvement of good companies over the years, the construction of the CMS detector wouldn't be possible...

  10. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    2010-01-01

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174

  11. Pharmacokinetics of colistin and colistimethate sodium after a single 80-mg intravenous dose of CMS in young healthy volunteers.

    Science.gov (United States)

    Couet, W; Grégoire, N; Gobin, P; Saulnier, P J; Frasca, D; Marchand, S; Mimoz, O

    2011-06-01

    Colistin pharmacokinetics (PK) was investigated in young healthy volunteers after a 1-h infusion of 80 mg (1 million international units (MIU)) of the prodrug colistin methanesulfonate (CMS). Concentration levels of CMS and colistin were determined in plasma and urine using a new chromatographic assay and analyzed simultaneously with a population approach after correcting the urine-related data for postexcretion hydrolysis of CMS into colistin. CMS and colistin have low volumes of distribution (14.0 and 12.4 liters, respectively), consistent with distribution being restricted to extracellular fluid. CMS is mainly excreted unchanged in urine (70% on average), with a typical renal clearance estimated at 103 ml/min, close to the glomerular filtration rate. Colistin elimination is essentially extrarenal, given that its renal clearance is 1.9 ml/min, consistent with extensive reabsorption. Colistin elimination is not limited by the formation rate because its half-life (3 h) is longer than that of CMS. The values of these pharmacokinetic parameters will serve as reference points for future comparisons with patients' data.
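
    As a quick plausibility check on the reported figures (not part of the abstract), one can assume a simple one-compartment model, in which total clearance is CL = ln(2) x V / t1/2. With the quoted volume of distribution and half-life this gives a total colistin clearance of roughly 48 ml/min, of which the reported 1.9 ml/min renal clearance is only a few percent, consistent with elimination being essentially extrarenal.

      # One-compartment sanity check: CL_total = ln(2) * V / t_half.
      import math

      V_COLISTIN_L = 12.4        # volume of distribution (liters), from the abstract
      T_HALF_H = 3.0             # colistin half-life (hours), from the abstract
      CL_RENAL_ML_MIN = 1.9      # reported renal clearance (ml/min)

      cl_total_l_per_h = math.log(2) * V_COLISTIN_L / T_HALF_H
      cl_total_ml_min = cl_total_l_per_h * 1000 / 60

      print(f"total clearance ~ {cl_total_ml_min:.0f} ml/min")               # ~48 ml/min
      print(f"renal fraction  ~ {CL_RENAL_ML_MIN / cl_total_ml_min:.0%}")    # ~4%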

  12. CERN Researchers' Night @ CMS + TOTEM

    CERN Multimedia

    Hoch, Michael

    2011-01-01

    Young researchers' shifter training at CMS: • introduction talk with discussion • CMS control room, shadowing the shifters • TOTEM control room introduction and discussion • scientific poster workshop and presentation • science-art installations ‘Faces of CMS’ & ‘Science Cloud’ • CMS shift diploma presentation

  13. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    Science.gov (United States)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks, from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction). We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  14. Quantitative proteomic analysis of CMS-related changes in Honglian CMS rice anther.

    Science.gov (United States)

    Sun, Qingping; Hu, Chaofeng; Hu, Jun; Li, Shaoqing; Zhu, Yingguo

    2009-10-01

    Honglian (HL) cytoplasmic male sterility (CMS) is one of the rice CMS types and has been widely used in hybrid rice production in China. The CMS line (Yuetai A, YTA) has a Yuetai B (maintainer line, YTB) nuclear genome, but has a rearranged mitochondrial (mt) genome compared with that of Yuetai B. The fertility of the hybrid (HL-6) was restored by a restorer gene in the nuclear genome of the restorer line (9311). We used isotope-coded affinity tag (ICAT) technology to perform protein profiling of uninucleate-stage rice anthers and to identify the CMS-HL-related proteins. Two separate ICAT analyses were performed in this study: (1) anthers from YTA versus anthers from YTB, and (2) anthers from YTA versus anthers from HL-6. Based on the two analyses, a total of 97 unique proteins were identified and quantified in uninucleate-stage rice anther at an error rate of less than 10%, of which eight proteins showed abundance changes of at least twofold between YTA and YTB. Triosephosphate isomerase, fructokinase II, DNA-binding protein GBP16 and ribosomal protein L3B were over-expressed in YTB, while oligopeptide transporter, floral organ regulator 1, kinase and S-adenosyl-L-methionine synthetase were over-expressed in YTA. The reduction of proteins associated with energy production and the fewer ATP equivalents detected in the CMS anther indicated that a low level of energy production played an important role in inducing CMS-HL.

  15. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223  The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 

  16. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily allow visitors to volunteer their computer resources and contribute to running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework splits the model simulation into pieces of small spatial and computational size and distributes them to thousands of nodes. A relational database is used for managing data connections and the work queue for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
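
    A central piece of such a platform is the server-side work queue that hands out small simulation chunks to volunteer nodes and records the returned results. The abstract only states that a relational database is used, so the sketch below is a hypothetical minimal version of that queue using SQLite in Python; the table layout, status values and chunk names are assumptions made for illustration.

      # Minimal relational work queue for volunteer nodes: tasks are claimed
      # atomically, computed remotely, and their results stored back.
      import sqlite3

      def init_db(conn):
          conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
              id INTEGER PRIMARY KEY,
              chunk TEXT NOT NULL,                  -- e.g. a sub-basin / time window
              status TEXT NOT NULL DEFAULT 'pending',
              result REAL)""")
          conn.commit()

      def claim_task(conn):
          """Atomically hand one pending task to a volunteer node."""
          with conn:
              row = conn.execute(
                  "SELECT id, chunk FROM tasks WHERE status='pending' LIMIT 1").fetchone()
              if row is None:
                  return None
              conn.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
          return row

      def complete_task(conn, task_id, result):
          with conn:
              conn.execute("UPDATE tasks SET status='done', result=? WHERE id=?",
                           (result, task_id))

      if __name__ == "__main__":
          conn = sqlite3.connect(":memory:")
          init_db(conn)
          conn.executemany("INSERT INTO tasks (chunk) VALUES (?)",
                           [("basin-01",), ("basin-02",), ("basin-03",)])
          conn.commit()
          task = claim_task(conn)
          complete_task(conn, task[0], result=42.0)  # pretend a node returned 42.0
          print(conn.execute("SELECT chunk, status, result FROM tasks").fetchall())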

  17. AsyncStageOut: Distributed user data management for CMS Analysis

    Science.gov (United States)

    Riahi, H.; Wildish, T.; Ciangottini, D.; Hernández, J. M.; Andreeva, J.; Balcas, J.; Karavakis, E.; Mascheroni, M.; Tanasijczuk, A. J.; Vaandering, E. W.

    2015-12-01

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.
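
    The essential idea of asynchronous stage-out is decoupling the job from the transfer: the job writes its output to local storage and records a transfer request, and a separate service later groups pending requests by source and destination and submits them in bulk to FTS. The sketch below is a toy Python illustration of that grouping step only; the record fields, site names and batch size are assumptions rather than the ASO schema, and submit_to_fts is a placeholder for the real FTS submission.

      # Toy asynchronous stage-out: group pending transfer requests by
      # (source, destination) and hand them to the transfer system in batches.
      from collections import defaultdict

      def make_batches(requests, batch_size=3):
          grouped = defaultdict(list)
          for req in requests:
              grouped[(req["source"], req["destination"])].append(req["lfn"])
          for (source, destination), lfns in grouped.items():
              for i in range(0, len(lfns), batch_size):
                  yield {"source": source, "destination": destination,
                         "files": lfns[i:i + batch_size]}

      def submit_to_fts(batch):
          # Placeholder: a real service would call the FTS REST API here.
          print(f"submit {len(batch['files'])} file(s): "
                f"{batch['source']} -> {batch['destination']}")

      if __name__ == "__main__":
          pending = [
              {"lfn": "/store/user/alice/out_1.root", "source": "T2_A", "destination": "T2_B"},
              {"lfn": "/store/user/alice/out_2.root", "source": "T2_A", "destination": "T2_B"},
              {"lfn": "/store/user/bob/out_1.root", "source": "T2_C", "destination": "T2_B"},
          ]
          for batch in make_batches(pending):
              submit_to_fts(batch)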

  18. AsyncStageOut: Distributed User Data Management for CMS Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Riahi, H. [CERN; Wildish, T. [Princeton U.; Ciangottini, D. [Perugia U.; Hernández, J. M. [Madrid, CIEMAT; Andreeva, J. [CERN; Balcas, J. [Vilnius U.; Karavakis, E. [CERN; Mascheroni, M. [INFN, Milan Bicocca; Tanasijczuk, A. J. [UC, San Diego; Vaandering, E. W. [Fermilab

    2015-12-23

    AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the job execution resulting in inefficient use of job slots and an unacceptable failure rate at the end of the jobs. ASO foresees the management of up to 400k files per day of various sizes, spread worldwide across more than 60 sites. It must handle up to 1000 individual users per month, and work with minimal delay. This creates challenging requirements for system scalability, performance and monitoring. ASO uses FTS to schedule and execute the transfers between the storage elements of the source and destination sites. It has evolved from a limited prototype to a highly adaptable service, which manages and monitors the user file placement and bookkeeping. To ensure system scalability and data monitoring, it employs new technologies such as a NoSQL database and re-uses existing components of PhEDEx and the FTS Dashboard. We present the asynchronous stage-out strategy and the architecture of the solution we implemented to deal with those issues and challenges. The deployment model for the high availability and scalability of the service is discussed. The performance of the system during the commissioning and the first phase of production are also shown, along with results from simulations designed to explore the limits of scalability.

  19. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview: During the past three months, activities focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at Fermilab on February 19th and 20th was an excellent opportunity for discussing the impact of, and for addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  1. Monitoring light source for CMS lead tungstate crystal calorimeter at LHC

    CERN Document Server

    Zhang Liang Ying; Zhu, R Y; Liu, D T

    2001-01-01

    Light monitoring will serve as an intercalibration for Compact Muon Solenoid (CMS) lead tungstate crystals in situ at the Large Hadron Collider, which is crucial for maintaining the crystal calorimeter's sub-percent constant term in the energy resolution. This paper presents the design of the CMS electromagnetic calorimeter monitoring light source and high-level distribution system. The correlations between variations of the light output and the transmittance for the CMS choice of yttrium-doped PbWO4 crystals were investigated and were used to study monitoring linearity and sensitivity as a function of wavelength. The monitoring wavelength was determined so that good linearity as well as adequate sensitivity can be achieved. The performance of a custom-manufactured tunable laser system is presented. Issues related to monitoring precision are discussed. (12 refs.)

  2. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
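
    Since DBS is exposed to users through a Python API and HTTP(S) endpoints, a typical interaction is a small client-side query for datasets and their files. The abstract does not include code, so the sketch below is a hypothetical illustration of such a query against a generic REST-style catalog; the base URL, endpoint names, parameters and returned fields are invented for the example and are not the actual DBS interface.

      # Hypothetical client for a dataset bookkeeping service exposing a REST-style
      # API over HTTPS (endpoint names and parameters are illustrative only).
      import json
      from urllib.parse import urlencode
      from urllib.request import urlopen

      BASE_URL = "https://dbs.example.org/api"   # placeholder, not a real endpoint

      def list_datasets(pattern):
          """Return catalog entries whose dataset path matches the given pattern."""
          url = f"{BASE_URL}/datasets?{urlencode({'dataset': pattern})}"
          with urlopen(url) as response:         # GRID-certificate auth omitted here
              return json.load(response)

      def list_files(dataset):
          url = f"{BASE_URL}/files?{urlencode({'dataset': dataset})}"
          with urlopen(url) as response:
              return json.load(response)

      if __name__ == "__main__":
          for entry in list_datasets("/Cosmics/*/RAW"):
              print(entry["dataset"], entry["nevents"])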

  3. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command-line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  4. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  5. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  6. A new dawn for CMS

    CERN Multimedia

    2007-01-01

    Supported by a gigantic crane and a factory-size room full of enthusiasm, the central barrel of CMS made its final journey underground on 28 February. The central section of the CMS detector starts its dramatic 10-hour descent underground. Several hours (and 100 metres) later, the massive barrel rests on the cavern floor. CMS scientists, journalists, photographers and members of the transport crew basked in the final rays of the 'solenoid-set' on 28 February as the central barrel of the CMS detector sank below the horizon and began its ten-hour descent into the cavern 100 metres below. Thirteen metres long and weighing as much as five jumbo jets (1920 tonnes), the barrel is the largest of the 15 chunks of CMS detector that are being lowered one by one into the cavern. 'This is a challenging feat of engineering, as there are just 20 cm of leeway between the detector and the walls of the shaft,' said Austin Ball, Technical Coordinator of CMS. The section of the detector, which contains the solenoid of the magne...

  7. 45 CFR 150.203 - Circumstances requiring CMS enforcement.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Circumstances requiring CMS enforcement. 150.203... CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement Processes for... requiring CMS enforcement. CMS enforces HIPAA requirements to the extent warranted (as determined by CMS) in...

  8. Monitoring the CMS Data Acquisition System

    CERN Document Server

    Bauer, Gerry; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa, J A; Deldicque, C; Dusinberre, E; Erhan, S; Fortes Rodrigues, F; Gigi, D; Glege, F; Gomez-Reino, R; Gutleber, J; Hatton, D; Laurens, J F; Lopez Perez, J A; Meijers, F; Meschi, E; Meyer, A; Mommsen, R; Moser, R; O'Dell, V; Oh, A; Orsini, L B; Patras, V; Paus, C; Petrucci, A; Pieri, M; Racz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, S; Sumorok, K; Zanetti, M.

    2010-01-01

    The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of col...
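
    The hierarchical collection described above can be pictured with a toy aggregation scheme: leaf services report raw metric samples to a regional collector, which reduces them to compact summaries before forwarding them upstream. The class names and the push/pull pattern below are illustrative assumptions only, not the actual CMS monitoring design.

```python
# Toy sketch of hierarchical monitoring collection: services publish metric
# samples to a regional collector, which forwards compact summaries to a
# central monitor. Names and structure are illustrative assumptions only.
from statistics import mean

class RegionalCollector:
    def __init__(self, name):
        self.name = name
        self.samples = {}                     # metric name -> reported values

    def report(self, metric, value):
        self.samples.setdefault(metric, []).append(value)

    def summary(self):
        # Reduce raw samples to a compact per-region summary before forwarding.
        return {m: {"n": len(v), "mean": mean(v), "max": max(v)}
                for m, v in self.samples.items()}

class CentralMonitor:
    def __init__(self):
        self.regions = {}

    def collect(self, collector):
        self.regions[collector.name] = collector.summary()

# Usage: one region aggregating event-rate reports from its services.
ebuilder = RegionalCollector("event-builder")
for rate in (98.5, 101.2, 99.7):
    ebuilder.report("event_rate_hz", rate)

central = CentralMonitor()
central.collect(ebuilder)
print(central.regions)
```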

  9. [Personal computer-based computer monitoring system of the anesthesiologist (2-year experience in development and use)].

    Science.gov (United States)

    Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I

    1995-01-01

    Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and effectively used it for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by real-time processing of the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.

  10. CMS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  11. CMS brochure (English version)

    CERN Document Server

    Marcastel, Fabienne

    2014-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  12. CMS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  13. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given

  14. Distributed computing at the SSCL

    International Nuclear Information System (INIS)

    Cormell, L.R.; White, R.C.

    1994-01-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of the central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory (SSCL). In addition, a brief review of the future directions of commercial products for distributed computing and management will be given

  15. Proceedings of the second workshop of LHC Computing Grid, LCG-France

    International Nuclear Information System (INIS)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin

    2007-03-01

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working with the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between the French and foreign site representatives on one side and delegates of the experiments on the other. The event highlighted the place of the LHC computing task within the framework of the worldwide W-LCG project, the ongoing actions and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computation in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing the failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. The network infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users and that the goal of tightening the links between the sites and the experiments had definitely been achieved. The IN2P3 leadership expressed

  16. CMS brochure (English version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  17. CMS brochure (French version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  18. QCD Monte-Carlo model tuning studies with CMS data at 13 TeV

    CERN Document Server

    Sunar Cerci, Deniz

    2018-01-01

    New CMS PYTHIA 8 event tunes are presented. The new tunes are obtained from minimum-bias and underlying-event observables, using Monte Carlo configurations with consistent parton distribution functions and strong coupling constant values in the matrix element and the parton shower. Validation and performance studies are presented by comparing the predictions of the new tunes to various soft- and hard-QCD measurements at 7, 8 and 13 TeV with CMS.

  19. A Whole-Body Physiologically Based Pharmacokinetic Model for Colistin and Colistin methanesulfonate (CMS) in Rat.

    Science.gov (United States)

    Bouchene, Salim; Marchand, Sandrine; Couet, William; Friberg, Lena E; Gobin, Patrice; Lamarche, Isabelle; Grégoire, Nicolas; Björkman, Sven; Karlsson, Mats O

    2018-04-17

    Colistin is a polymyxin antibiotic used to treat patients infected with multidrug-resistant Gram-negative bacteria (MDR-GNB). The objective of this work was to develop a whole-body physiologically based pharmacokinetic (WB-PBPK) model to predict tissue distribution of colistin in rat. The distribution of a drug in a tissue is commonly characterized by its tissue-to-plasma partition coefficient, Kp. Kp priors for colistin and its prodrug, colistin methanesulfonate (CMS), were measured experimentally from rat tissue homogenates or predicted in silico. The PK parameters of both compounds were estimated by fitting their in vivo plasma concentration-time profiles from six rats receiving an i.v. bolus of CMS. The variability in the data was quantified by applying a non-linear mixed effect (NLME) modelling approach. A WB-PBPK model was developed assuming a well-stirred and perfusion-limited distribution in tissue compartments. Prior information on tissue distribution of colistin and CMS was investigated following three scenarios: Kp values were estimated using in silico priors (I), estimated using experimental priors (II), or fixed to the experimental values (III). The WB-PBPK model best described colistin and CMS plasma concentration-time profiles in scenario II. Predicted colistin concentrations in kidneys in scenario II were higher than in other tissues, which was consistent with its large experimental Kp prior. This might be explained by a high affinity of colistin for renal parenchyma and active reabsorption into the proximal tubular cells. In contrast, renal accumulation of colistin was not predicted in scenario I. Colistin and CMS clearance estimates were in agreement with published values. The developed model suggests using experimental priors over in silico Kp priors for kidneys to provide a better prediction of colistin renal distribution. Such models might serve in drug development for interspecies scaling and investigating the impact of
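
    A generic form of the well-stirred, perfusion-limited tissue compartment used in such WB-PBPK models is shown below. This is the standard textbook form, assumed here for illustration and not taken from the paper itself; Q_t is the tissue blood flow, V_t the tissue volume, C_t the tissue concentration, C_p the arterial plasma concentration and K_p the tissue-to-plasma partition coefficient.

```latex
% Generic well-stirred, perfusion-limited tissue compartment (standard PBPK
% form, assumed here for illustration).
V_t \, \frac{dC_t}{dt} = Q_t \left( C_p - \frac{C_t}{K_p} \right)
```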

  20. A new visitor centre for CMS

    CERN Document Server

    2001-01-01

    At the inauguration of the new CMS visitor centre. The CMS experiment inaugurated a new visitor centre at its Cessy site on 14 June. This will allow the thousands of people who come to CERN each year to follow the construction of one the Laboratory's flagship experiments first-hand. CERN receives over 20,000 visitors each year. Until recently, many of them were taken on a guided tour of one of the LEP experiments. With the closure of LEP, however, trips underground are no longer possible, and the Visits' Service has put in place a number of other itineraries (Bulletin 46/2000). Since the CMS detector will be almost entirely constructed in a surface hall, it is now taking a big share of the limelight. The CMS visitor centre has been built on a platform overlooking CMS construction. It contains a set of clear descriptive posters describing the experiment, along with a video projection showing animations and movies about CMS construction. In the coming weeks, a display of CMS detector elements will be added, as...

  1. CMS Trigger Performance

    CERN Document Server

    Donato, Silvio

    2017-01-01

    During its second run of operation (Run 2), which started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach $2 \cdot 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT have undergone significant improvements; in particular, new appr...

  2. Auger Physicists visit CMS

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    Visit to CMS at CERN P5, in the experimental cavern: Alan Watson, Auger Spokesperson Emeritus, University of Leeds; Jim Cronin, Nobel Laureate, Auger Spokesperson Emeritus, University of Chicago; Jim Virdee, CMS Former Spokesperson, Imperial College; Jim Matthews, Auger Co-Spokesperson, Louisiana State University

  3. Photos from the CMS Photo Book

    CERN Multimedia

    Boreham, S

    2008-01-01

    Photos from the CMS Photo Book. Activities at Point 5 in Cessy, France, between 1998 and 2008. Images of assembly and installation of the CMS detector: - Civil Engineering - Assembly in the Surface Building - Lowering of the Heavy Elements - Installing and connecting the CMS detector in the underground experiment caverns. These images illustrate the assembly, installation and commissioning of the CMS detector. They cover the activities at Point 5 in Cessy, France, between 1998 and 2008. CMS is one of the most complex scientific instruments ever built. It has taken about 20 years to go from conceptual design to the completion of construction of the CMS detector for the LHC start-up in September 2008. Accomplishing this has required the talents, efforts and resources of over 2500 scientists and engineers from about 180 institutions in 38 countries. Compiled by: S. Cittolin, F. Marcastel and T.S. Virdee

  4. CMS Higgs boson results

    CERN Document Server

    Bluj, Michal Jacek

    2018-01-01

    In this report we review recent Higgs boson results obtained with pp collisions at $\sqrt{s} = 13$ TeV recorded by the CMS detector in 2016, for an integrated luminosity of 35.9 fb$^{-1}$. The 2016 data allowed the observation of the $H \to \tau\tau$ and $H \to WW$ decays with high significance. We also present a combined measurement based on a full set of CMS analyses performed with 2016 data. These results are compatible with the standard model predictions, with the precision of several measurements exceeding that of the combination of ATLAS and CMS data collected in 2011 and 2012.

  5. 10th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Seghrouchni, Amal; Beynier, Aurélie; Camacho, David; Herpson, Cédric; Hindriks, Koen; Novais, Paulo

    2017-01-01

    This book presents the combined peer-reviewed proceedings of the tenth International Symposium on Intelligent Distributed Computing (IDC’2016), which was held in Paris, France from October 10th to 12th, 2016. The 23 contributions address a range of topics related to theory and application of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  6. A multi-dimensional view on information retrieval of CMS data

    International Nuclear Information System (INIS)

    Dolgert, A; Gibbons, L; Kuznetsov, V; Jones, C D; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping System (DBS) search page is a web-based application used by physicists and production managers to find data from the CMS experiment. The main challenge in the design of the system was to map the complex, distributed data model embodied in the DBS and the Data Location Service (DLS) to a simple, intuitive interface consistent with the mental model of physicists analyzing the data. We used focus groups and user interviews to establish the required features. The resulting interface addresses the physicist and production manager roles separately, offering both a guided search structured for the common physics use cases and a dynamic advanced query interface

  7. 78 FR 49525 - Privacy Act of 1974; CMS Computer Match No. 2013-06; HHS Computer Match No. 1308

    Science.gov (United States)

    2013-08-14

    ... certain protections for individuals applying for and receiving Federal benefits. Section 7201 of the.... Verify match findings before reducing, suspending, terminating, or denying an individual's benefits or... credit (APTC) and cost sharing reductions (CSR). The data will be used by CMS in its capacity as a...

  8. DIRAC distributed computing services

    International Nuclear Information System (INIS)

    Tsaregorodtsev, A

    2014-01-01

    The DIRAC Project provides a general-purpose framework for building distributed computing systems. It is now used in several HEP and astrophysics experiments as well as by user communities in other scientific domains. There is large interest from smaller user communities in having a simple tool like DIRAC for accessing grid and other types of distributed computing resources. However, small experiments cannot afford to install and maintain dedicated services. Therefore, several grid infrastructure projects are providing DIRAC services for their respective user communities. These services are used for user tutorials as well as to help port applications to the grid for practical day-to-day work. The services typically give access to several grid infrastructures as well as to standalone computing clusters accessible to the target user communities. In the paper we present the experience of running DIRAC services provided by the France-Grilles NGI and other national grid infrastructure projects.

  9. Russian institute receives CMS Gold Award

    CERN Multimedia

    Patrice Loïez

    2003-01-01

    The Snezhinsk All-Russian Institute of Scientific Research for Technical Physics (VNIITF) of the Russian Federal Nuclear Centre (RFNC) is one of twelve CMS suppliers to receive awards for outstanding performance this year. The CMS Collaboration took the opportunity of the visit to CERN of the Director of VNIITF and his deputy to present the CMS Gold Award, which the institute has received for its exceptional performance in the assembly of steel plates for the CMS forward hadronic calorimeter. This calorimeter consists of two sets of 18 wedge-shaped modules arranged concentrically around the beam-pipe at each end of the CMS detector. Each module consists of steel absorber plates with quartz fibres inserted into them. The institute developed a special welding technique to assemble the absorber plates, enabling a high-quality detector to be produced at relatively low cost.RFNC-VNIITF Director Professor Georgy Rykovanov (right), is seen here receiving the Gold Award from Felicitas Pauss, Vice-Chairman of the CMS ...

  10. CMS (Compact Muon Solenoid)

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The milestone workshops on LHC experiments in Aachen in 1990 and at Evian in 1992 provided the first sketches of how LHC detectors might look. The concept of a compact general-purpose LHC experiment based on a solenoid to provide the magnetic field was first discussed at Aachen, and the formal Expression of Interest was aired at Evian. It was here that the Compact Muon Solenoid (CMS) name first became public. Optimizing first the muon detection system is a natural starting point for a high luminosity (interaction rate) proton-proton collider experiment. The compact CMS design called for a strong magnetic field, of some 4 Tesla, using a superconducting solenoid, originally about 14 metres long and 6 metres bore. (By LHC standards, this warrants the adjective 'compact'.) The main design goals of CMS are: 1 - a very good muon system providing many possibilities for momentum measurement (physicists call this a 'highly redundant' system); 2 - the best possible electromagnetic calorimeter consistent with the above; 3 - high quality central tracking to achieve both the above; and 4 - an affordable detector. Overall, CMS aims to detect cleanly the diverse signatures of new physics by identifying and precisely measuring muons, electrons and photons over a large energy range at very high collision rates, while also exploiting the lower luminosity initial running. As well as proton-proton collisions, CMS will also be able to look at the muons emerging from LHC heavy ion beam collisions. The Evian CMS conceptual design foresaw the full calorimetry inside the solenoid, with emphasis on precision electromagnetic calorimetry for picking up photons. (A light Higgs particle will probably be seen via its decay into photon pairs.) The muon system now foresaw four stations. Inner tracking would use silicon microstrips and microstrip gas chambers, with over 10^7 channels offering high track finding efficiency. In the central CMS barrel, the tracking elements are

  11. 42 CFR 489.53 - Termination by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Termination by CMS. 489.53 Section 489.53 Public... Reinstatement After Termination § 489.53 Termination by CMS. (a) Basis for termination of agreement with any provider. CMS may terminate the agreement with any provider if CMS finds that any of the following failings...

  12. Computer Graphics Simulations of Sampling Distributions.

    Science.gov (United States)

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  13. CMS users data management service integration and first experiences with its NoSQL data storage

    International Nuclear Information System (INIS)

    Riahi, H; Spiga, D; Cinquilli, M; Boccali, T; Ciangottini, D; Santocchia, A; Hernàndez, J M; Konstantinov, P; Mascheroni, M

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in the use of CMS computing resources when analysis job outputs are transferred synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on a NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of the users' files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved, as determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
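
    The sketch below illustrates the "thin application on top of CouchDB" idea described above: a stage-out request is recorded as a JSON document that workers can later pick up and whose state can be polled. The database URL and document schema are illustrative assumptions, not the real AsyncStageOut data model.

```python
# Sketch of recording and tracking a stage-out request as a JSON document in
# CouchDB, in the spirit of the thin NoSQL-backed service described above.
# The database URL and document schema are illustrative assumptions.
import requests

COUCH_DB = "http://localhost:5984/asyncstageout_demo"   # hypothetical database

def queue_transfer(doc_id, user, source_lfn, destination):
    """Create one transfer document; transfer workers would later pick it up."""
    doc = {"user": user, "source_lfn": source_lfn,
           "destination": destination, "state": "new"}
    resp = requests.put(f"{COUCH_DB}/{doc_id}", json=doc)
    resp.raise_for_status()
    return resp.json()                      # contains the new document revision

def transfer_state(doc_id):
    resp = requests.get(f"{COUCH_DB}/{doc_id}")
    resp.raise_for_status()
    return resp.json()["state"]

# Usage (assumes a local CouchDB instance with the database already created):
# queue_transfer("job1234-out1", "jdoe", "/store/user/jdoe/out_1.root", "T2_IT_Pisa")
# print(transfer_state("job1234-out1"))
```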

  14. CMS users data management service integration and first experiences with its NoSQL data storage

    Science.gov (United States)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in the use of CMS computing resources when analysis job outputs are transferred synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on a NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of the users' files, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved, as determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.

  15. Overview over opportunities for measuring new physics with ATLAS and CMS

    CERN Document Server

    Johansson, Per; The ATLAS collaboration

    2018-01-01

    This document gives an overview of opportunities for measuring new physics with ATLAS and CMS, describing different signatures and searches, such as angular distributions, different analysis techniques currently being pursued at ATLAS and CMS, as well as future prospects.

  16. Jim Virdee, the new spokesperson of CMS

    CERN Multimedia

    2006-01-01

    Jim Virdee and Michel Della Negra. On 21 June Tejinder 'Jim'Virdee was elected by the CMS collaboration as its new spokesperson, his 3-year term of office beginning in January 2007. He will take over from Michel Della Negra, who has been CMS spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Dan Green from Fermilab, programme manager of the US-CMS collaboration and coordinator of the CMS Hadron Calorimeter project; Jim Virdee from Imperial College London and CERN, deputy spokesperson of CMS since 1993; Gigi Rolandi from the University of Trieste and CERN, ex-Aleph spokesperson and currently involved in the preparations of the physics analyses to be done with CMS. On the early evening of 21 June, 141 of the 142 members of the CMS collaboration board, some represented by proxies, took part in a secret ballot. After two rounds of voting Jim Virdee was elected as spokesperson with a clear majority. Jim thanked the CMS collaboration 'for putting conf...

  17. Wide area network access to CMS data using the LustreTM filesystem

    Science.gov (United States)

    Rodriguez, J. L.; Avery, P.; Brody, T.; Bourilkov, D.; Fu, Y.; Kim, B.; Prescott, C.; Wu, Y.

    2010-04-01

    In this paper, we explore the use of the LustreTM cluster filesystem over the wide area network to access Compact Muon Solenoid (CMS) data stored on physical devices located hundreds of kilometres away. We describe the experimental testbed and report on the I/O performance of applications writing and reading data on the distributed LustreTM filesystem established across the WAN. We compare the I/O performance of a CMS application to the performance obtained with IOzone, a standard benchmark tool. We then examine the I/O performance of the CMS application running multiple processes on a single server, and compare the Lustre results to results obtained on data stored on local filesystems. Our measurements reveal that the IOzone benchmark tool, accessing data sequentially, can saturate the Gbps network link that connects our Lustre client in Miami, Florida to the Lustre storage located in Gainesville, Florida. We also find that the I/O rates of the CMS application are significantly lower than what can be obtained with IOzone for sequential access to data.
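
    A very simplified, IOzone-like sequential throughput probe for a mounted filesystem (for instance a Lustre client mount) is sketched below. It is only an illustration of the kind of measurement discussed above, not a replacement for IOzone, and the mount point used is an assumption. A real benchmark such as IOzone also controls for client-side caching, which this sketch does not.

```python
# Rough sequential write/read throughput probe for a mounted filesystem.
# Illustrative only; the mount point is a placeholder assumption.
import os, time

def sequential_throughput(path, size_mb=256, block_mb=1):
    block = b"\0" * (block_mb * 1024 * 1024)
    nblocks = size_mb // block_mb

    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(nblocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # include time to push data to storage
    write_mbps = size_mb / (time.monotonic() - start)

    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(block_mb * 1024 * 1024):
            pass
    read_mbps = size_mb / (time.monotonic() - start)

    os.remove(path)
    return write_mbps, read_mbps

if __name__ == "__main__":
    w, r = sequential_throughput("/mnt/lustre/testfile.bin")
    print(f"sequential write: {w:.1f} MB/s, read: {r:.1f} MB/s")
```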

  18. Wide area network access to CMS data using the LustreTM filesystem

    International Nuclear Information System (INIS)

    Rodriguez, J L; Brody, T; Avery, P; Bourilkov, D; Fu, Y; Kim, B; Wu, Y; Prescott, C

    2010-01-01

    In this paper, we explore the use of the LustreTM cluster filesystem over the wide area network to access Compact Muon Solenoid (CMS) data stored on physical devices located hundreds of kilometres away. We describe the experimental testbed and report on the I/O performance of applications writing and reading data on the distributed LustreTM filesystem established across the WAN. We compare the I/O performance of a CMS application to the performance obtained with IOzone, a standard benchmark tool. We then examine the I/O performance of the CMS application running multiple processes on a single server, and compare the Lustre results to results obtained on data stored on local filesystems. Our measurements reveal that the IOzone benchmark tool, accessing data sequentially, can saturate the Gbps network link that connects our Lustre client in Miami, Florida to the Lustre storage located in Gainesville, Florida. We also find that the I/O rates of the CMS application are significantly lower than what can be obtained with IOzone for sequential access to data.

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08: During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  20. 42 CFR 401.108 - CMS rulings.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS rulings. 401.108 Section 401.108 Public Health... GENERAL ADMINISTRATIVE REQUIREMENTS Confidentiality and Disclosure § 401.108 CMS rulings. (a) After... regulations, but which has been adopted by CMS as having precedent, may be published in the Federal Register...

  1. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  2. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  3. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each one of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS) that provide basic 3D capabilities and integration within CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking

  4. Data Scouting in CMS

    CERN Document Server

    Anderson, Dustin James

    2016-01-01

    In 2011, the CMS collaboration introduced Data Scouting as a way to produce physics results with events that cannot be stored on disk, due to resource limits in the data acquisition and offline infrastructure. The viability of this technique was demonstrated in 2012, when 18 fb$^{-1}$ of collision data at $\\sqrt{s}$ = 8 TeV were collected. The technique is now a standard ingredient of CMS and ATLAS data-taking strategy. In this talk, we present the status of data scouting in CMS and the improvements introduced in 2015 and 2016, which promoted data scouting to a full-fledged, flexible discovery tool for the LHC Run II.

  5. Distributed computing for macromolecular crystallography.

    Science.gov (United States)

    Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Winn, Martyn; Ballard, Charles

    2018-02-01

    Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community.

  6. Rivet usage at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Radziej, Markus; Hebbeker, Thomas; Sonnenschein, Lars [III. Phys. Inst. A, RWTH Aachen (Germany)

    2015-07-01

    In this talk an overview of Rivet and its usage at the CMS experiment is presented. Rivet stands for 'Robust Independent Validation of Experiment and Theory' and is used for optimizing and validating Monte Carlo event generators. By using the results of published analyses, distributions of the simulation can be compared to experimental measurements (corrected for detector effects). This gives insight into the agreement at the particle level. Starting off with an introduction to the Rivet environment, the purpose of this tool in modern particle physics is explained. Before taking a closer look at the analysis structure, the software necessary to get comparisons is outlined. Analysis implementations are discussed using code examples, showcasing the powerful framework that Rivet provides. A few selected final distributions displaying both Monte Carlo generated events and recorded data are presented, showing the potential to perform particle-level comparisons.

  7. Configuration monitoring tool for large-scale distributed computing

    International Nuclear Information System (INIS)

    Wu, Y.; Graham, G.; Lu, X.; Afaq, A.; Kim, B.J.; Fisk, I.

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources
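
    The "relational database for keeping track of the current and old configurations" idea mentioned above can be sketched with a small SQLite example: every configuration snapshot is appended with a timestamp, so older configurations remain queryable. The schema and helper functions below are assumptions for illustration, not the tool's actual design.

```python
# Sketch of keeping current and historical configuration snapshots in a
# relational database, using SQLite for illustration (schema is an assumption).
import sqlite3, time

conn = sqlite3.connect("grid_config.db")
conn.execute("""CREATE TABLE IF NOT EXISTS site_config (
                    site      TEXT NOT NULL,
                    key       TEXT NOT NULL,
                    value     TEXT NOT NULL,
                    recorded  REAL NOT NULL)""")

def record_config(site, settings):
    """Store a new configuration snapshot without deleting older ones."""
    now = time.time()
    conn.executemany("INSERT INTO site_config VALUES (?, ?, ?, ?)",
                     [(site, k, v, now) for k, v in settings.items()])
    conn.commit()

def current_config(site):
    """Return the most recently recorded value of each key for a site."""
    rows = conn.execute("""SELECT key, value FROM site_config
                           WHERE site = ?
                           ORDER BY recorded""", (site,))
    latest = {}
    for key, value in rows:
        latest[key] = value        # later rows overwrite earlier ones
    return latest

record_config("T2_US_Florida", {"worker_nodes": "120", "storage_tb": "40"})
print(current_config("T2_US_Florida"))
```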

  8. Configuration monitoring tool for large-scale distributed computing

    CERN Document Server

    Wu, Y; Fisk, I; Graham, G; Kim, B J; Lü, X

    2004-01-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN will likely use a grid system to achieve much of its offline processing need. Given the heterogeneous and dynamic nature of grid systems, it is desirable to have in place a configuration monitor. The configuration monitoring tool is built using the Globus toolkit and web services. It consists of an information provider for the Globus MDS, a relational database for keeping track of the current and old configurations, and client interfaces to query and administer the configuration system. The Grid Security Infrastructure (GSI), together with EDG Java Security packages, are used for secure authentication and transparent access to the configuration information across the CMS grid. This work has been prototyped and tested using US-CMS grid resources.

  9. Measuring CMS Software Performance in the first years of LHC collisions

    CERN Document Server

    Benelli, Gabriele; Pfeiffer, Andreas; Piparo, Danilo; Zemleris, Vidmantas

    2011-01-01

    The CMSSW software framework is a complex project enabling the CMS collaboration to investigate the fast growing LHC collision data sample. A software performance suite of tools has been developed and integrated in CMSSW to keep track of cpu time, memory footprint and event size on disk. These three metrics are key constraints in software development in order to meet the computing requirements used in the planning and management of the CMS computing infrastructure. The performance suite allows the measurement and tracking of the performance across the framework, publishing the results in a dedicated database. A web application makes the results easily accessible to software release managers allowing for automatic integration in CMSSW release cycle quality assurance. The performance suite is also available to individual developers for dedicated code optimization and the web application allows historic regression and comparisons across releases. The performance suite tools and the performance of the CMSSW frame...
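
    The three metrics named above (CPU time, memory footprint, event size on disk) can be tracked for any processing step with a small wrapper like the one below. This is a generic illustration of the idea, not the CMSSW performance suite itself; the toy processing step and metric names are assumptions.

```python
# Generic sketch: measure CPU time, peak (Python) memory and output size of a
# processing step; the resulting metrics could be published to a database for
# regression tracking across releases. Illustrative only.
import os, time, tracemalloc, json

def measure(step, output_path, *args, **kwargs):
    tracemalloc.start()
    t0 = time.process_time()
    result = step(*args, **kwargs)
    cpu_s = time.process_time() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    size_bytes = os.path.getsize(output_path) if os.path.exists(output_path) else 0
    return result, {"cpu_s": cpu_s,
                    "peak_mem_mb": peak_bytes / 1e6,
                    "output_kb": size_bytes / 1e3}

def toy_step(path, n_events=10000):
    # Stand-in for a real reconstruction step: build events and write them out.
    events = [{"event": i, "pt": 0.1 * i} for i in range(n_events)]
    with open(path, "w") as f:
        json.dump(events, f)

_, metrics = measure(toy_step, "events.json", "events.json")
print(metrics)
```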

  10. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    CERN Document Server

    Lobanov, Artur

    2017-01-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0-10 pC), low noise (~2000e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~10mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing all the data from the HGCAL imposes equally large ch...

  11. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been introduced since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator
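
    For reference, the sequential baseline that PDES aims to speed up is a simple discrete-event kernel: an event list ordered by timestamp, processed one event at a time, with handlers scheduling follow-up events. The minimal sketch below shows only this single-process baseline; the event names and workload are illustrative assumptions.

```python
# Minimal sequential discrete-event simulation kernel (event list ordered by
# timestamp). PDES partitions this kind of simulation across processors while
# preserving causality; this sketch is only the single-process baseline.
import heapq, itertools

_seq = itertools.count()   # tie-breaker so heap never compares handlers

def run(initial_events, until):
    """initial_events: iterable of (time, handler); handlers return new (time, handler) pairs."""
    queue = [(t, next(_seq), h) for t, h in initial_events]
    heapq.heapify(queue)
    while queue:
        t, _, handler = heapq.heappop(queue)
        if t > until:
            break
        for new_t, new_h in handler(t):
            heapq.heappush(queue, (new_t, next(_seq), new_h))

# Toy workload: a job arrives every 2 time units and finishes 1.5 units later.
def arrival(t):
    print(f"{t:5.1f}  job arrives")
    return [(t + 1.5, completion), (t + 2.0, arrival)]

def completion(t):
    print(f"{t:5.1f}  job completes")
    return []

run([(0.0, arrival)], until=6.0)
```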

  12. LHCC COMPREHENSIVE REVIEW OF CMS (JULY 07)

    CERN Multimedia

    Extract from the Draft Report 1. EXECUTIVE SUMMARY The CMS Collaboration has made significant progress towards producing a detector ready for LHC operation in 2008. The past year saw all sub-detector groups successfully produce high-quality components and modules, and integrate them into the final objects to be installed into the CMS magnet. Installation and commissioning of final components in the CMS UXC55 cavern are well under way. In particular, the heavy lowering of detector elements into the CMS experiment cavern is a major success. The new CMS master schedule V36 incorporates the revised LHC machine schedule and includes an optimized detector sequencing. In spite of various delays, it remains possible that CMS will have an initial detector ready to exploit the initial LHC run in spring 2008. Installation of the Electromagnetic Calorimeter End-Cap (EE) and Pre-shower (ES) detectors is scheduled to be completed no sooner than July 2008 and CMS now plans to install the complete Pixel Detector for ...

  13. The performance of CMS ZDC detector in 2016

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The Zero Degree Calorimeter (ZDC) detects neutral particles in the $\lvert\eta\rvert > 8.5$ region. In 2016, the ZDC was cross-calibrated to the 2010 dataset. Peaks corresponding to 1, 2 and 3 neutrons are visible in the ZDC total signal distribution. The effect of pileup is corrected by a Fourier deconvolution method. The neutron number distribution is unfolded using a linear regularization method. The ZDC can be used as an unbiased centrality estimator in pPb collisions, but theoretical models valid at the LHC are needed for this. The CMS ZDC is able to measure the spectator neutron multiplicity distribution, which will be useful information for developing such models.
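
    The Fourier deconvolution step mentioned above amounts to dividing the spectrum of the measured signal by the spectrum of a response shape and transforming back, with a small regularization term to avoid amplifying noise where the response is tiny. The sketch below is a generic illustration of that technique on toy shapes; it is not the CMS pileup correction itself.

```python
# Generic Fourier (Wiener-regularized) deconvolution on toy data; illustrative
# only, not the actual CMS ZDC pileup correction.
import numpy as np

def fourier_deconvolve(measured, response, eps=1e-3):
    M = np.fft.rfft(measured)
    R = np.fft.rfft(response, n=len(measured))
    # Regularized division in frequency space, then back to the time domain.
    return np.fft.irfft(M * np.conj(R) / (np.abs(R) ** 2 + eps), n=len(measured))

# Toy example: a single pulse smeared by an exponential tail.
t = np.arange(256)
true_signal = np.zeros(256); true_signal[40] = 1.0
response = np.exp(-t / 10.0); response /= response.sum()
measured = np.convolve(true_signal, response)[:256]

recovered = fourier_deconvolve(measured, response)
print("peak located at bin", int(np.argmax(recovered)))   # expect ~40
```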

  14. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
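
    The approach described above can be sketched as follows: fit a Gaussian process to the log-probability values evaluated so far and choose the next evaluation point by maximizing an acquisition function (expected improvement here). The toy Gaussian target below stands in for a computationally expensive posterior; function names and settings are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Bayesian optimization of a probability distribution's maximizer:
# a Gaussian process surrogate plus an expected-improvement acquisition.
# Illustrative only; the target is a cheap toy stand-in for an expensive posterior.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_target(x):                      # expensive log-posterior would go here
    return -0.5 * ((x - 1.3) / 0.4) ** 2

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))             # initial design points
y = np.array([log_target(x[0]) for x in X])

grid = np.linspace(-3, 3, 601).reshape(-1, 1)
for _ in range(15):                              # sequential acquisition loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, log_target(x_next[0]))

print("best maximizer found:", X[np.argmax(y)][0])   # should approach 1.3
```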

  15. Monitoring the CMS strip tracker readout system

    International Nuclear Information System (INIS)

    Mersi, S; Bainbridge, R; Cripps, N; Fulcher, J; Wingham, M; Baulieu, G; Bel, S; Delaere, C; Drouhin, F; Mirabito, L; Cole, J; Giassi, A; Gross, L; Hahn, K; Nikolic, M; Tkaczyk, S

    2008-01-01

    The CMS Silicon Strip Tracker at the LHC comprises a sensitive area of approximately 200 m 2 and 10 million readout channels. Its data acquisition system is based around a custom analogue front-end chip. Both the control and the readout of the front-end electronics are performed by off-detector VME boards in the counting room, which digitise the raw event data and perform zero-suppression and formatting. The data acquisition system uses the CMS online software framework to configure, control and monitor the hardware components and steer the data acquisition. The first data analysis is performed online within the official CMS reconstruction framework, which provides many services, such as distributed analysis, access to geometry and conditions data, and a Data Quality Monitoring tool based on the online physics reconstruction. The data acquisition monitoring of the Strip Tracker uses both the data acquisition and the reconstruction software frameworks in order to provide real-time feedback to shifters on the operational state of the detector, archiving for later analysis and possibly trigger automatic recovery actions in case of errors. Here we review the proposed architecture of the monitoring system and we describe its software components, which are already in place, the various monitoring streams available, and our experiences of operating and monitoring a large-scale system

  16. The CMS Magnetic Field Map Performance

    CERN Document Server

    Klyukhin, V.I.; Andreev, V.; Ball, A.; Cure, B.; Herve, A.; Gaddi, A.; Gerwig, H.; Karimaki, V.; Loveless, R.; Mulders, M.; Popescu, S.; Sarycheva, L.I.; Virdee, T.

    2010-04-05

    The Compact Muon Solenoid (CMS) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with 6 m diameter by 12.5 m long free bore, enclosed inside a 10000-ton return yoke made of construction steel. Accurate characterization of the magnetic field everywhere in the CMS detector is required. During two major tests of the CMS magnet the magnetic flux density was measured inside the coil in a cylinder of 3.448 m diameter and 7 m length with a specially designed field-mapping pneumatic machine as well as in 140 discrete regions of the CMS yoke with NMR probes, 3-D Hall sensors and flux-loops. A TOSCA 3-D model of the CMS magnet has been developed to describe the magnetic field everywhere outside the tracking volume measured with the field-mapping machine. A volume based representation of the magnetic field is used to provide the CMS simulation and reconstruction software with the magnetic field ...

  17. Guido Tonelli elected next CMS spokesperson

    CERN Multimedia

    2009-01-01

    Guido Tonelli has been elected as the next CMS spokesperson. He will take over from Jim Virdee on January 1, 2010, and will head the collaboration through the first crucial year of data-taking. Guido Tonelli, CMS spokesperson-elect, into the CMS cavern. "It will be very tough and there will be enormous pressure," explains Guido Tonelli, CMS spokesperson-elect. "It will be the first time that CMS will run for a whole year so it is important to go through the checklist to be able to take good quality data." Tonelli, who is currently CMS Deputy spokesperson, will take over from Jim Virdee on January 1, 2010 – only a few months into CMS’s first full year of data-taking. "The collisions will probably be different to our expectations. So it’s going to take the effort of the entire collaboration worldwide to be ready for this new phase." Born in Italy, Tonelli originally studied at the University of Pisa, where he is now a Professo...

  18. CMS Status

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    2007-01-01

    The status of the construction and installation of the CMS detector is reviewed. The 4 T magnet has been cold since the end of February 2006. Its commissioning up to the nominal field started in July 2006, allowing a Cosmic Challenge in which elements of the final detector are involved. All the big mechanical pieces equipped with muon chambers have been assembled in the surface hall SX5. Since mid-July the detector has been closed, with the commissioned HCAL, two ECAL supermodules and representative elements of the silicon tracker. The trigger system as well as the DAQ are being tested. With the physics TDR achieved, CMS is now ready for the promising signal hunting. (author)

  19. Faces of CMS: Photomosaic (September 2013, low-resolution)

    CERN Multimedia

    Antonelli, Jamie

    2013-01-01

    The "Faces of CMS" photomosaic project aims to show the human element of the CMS Experiment. Most of the images for public outreach show the experimental equipment of CMS or physics results and collision displays. With a collaboration of around 3,000 people scattered around the globe, it's difficult to present the members of CMS in any one image. We asked any interested CMS members to sign up for the project, and allow us to use their photographs. The resulting photo mosaic contains the faces of 1,271 CMS members.

  20. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  1. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Full Text Available Researchers have lately been paying increasingly more attention to parallel and distributed algorithms for solving high-dimensionality problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technology and methods for organizing a distributed computational infrastructure. The author addresses the experience of creating a high-performance system powered by existing clusterization and grid computing technology. The approach examined in the article helps minimize financial costs, aggregate territorially distributed computational resources and ensure a more rational use of available computer equipment by eliminating its downtime.

  2. New Management for CMS

    CERN Document Server

    CERN Bulletin

    2010-01-01

    As of January 2010, Guido Tonelli becomes the new CMS Spokesperson with a two-year term of office. A Professor of General Physics at the University of Pisa, Italy, and a CERN Staff Member since January 2010, Tonelli had already been appointed as Deputy Spokesperson under the previous management. He has taken over from Jim Virdee, who was CMS Spokesperson from January 2007 to December 2009. Guido Tonelli, new CMS spokesperson At the same time as Tonelli becomes Spokesperson, two new Deputies, Albert De Roeck and Joe Incandela, as well as a whole new set of Coordinators, are also starting their terms of office. ”With the first data-taking run we have shown that CMS is an excellent experiment. The next challenge will be to transform CMS into a discovery machine with a view to making it synonymous with scientific excellence. This will be very tough but, again, the winning element will be the focus and coherent effort of the whole collaboration. On my side I'll do my best but I will need...

  3. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention. One such method is the use of multi-agent systems. Distributed computing organized on conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with ordinary networked PCs used as the computing nodes. The proposed multi-agent control system makes it possible to quickly organize distributed computing using the processing power of the computers in any existing network in order to solve large tasks. Agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the agent-operated computers, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the system can be increased by attaching new computers to it, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (a dynamically changing number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could lead to wrong decisions, and it also checks and corrects wrong results.
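    The record above describes its scheme only in prose, so here is a minimal sketch of the two ideas it emphasizes: spreading the load over agent-operated PCs according to their computing power, and detecting falsified results by redundant execution. The `Agent`, `schedule` and `cross_check` names, and the use of Python, are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' system): weighted load
# distribution over agent-operated PCs plus redundant execution to
# detect falsified results.
import random


class Agent:
    """One networked PC operated by an agent; `power` is its relative speed."""

    def __init__(self, name, power, faulty=False):
        self.name = name
        self.power = power
        self.faulty = faulty  # a compromised node may falsify its results

    def run(self, task):
        result = task * task  # stand-in for the real computation
        return result + 1 if self.faulty and random.random() < 0.5 else result


def schedule(tasks, agents):
    """Give each task to the agent with the smallest load relative to its power."""
    load = {a.name: 0 for a in agents}
    plan = {a.name: [] for a in agents}
    for t in tasks:
        a = min(agents, key=lambda ag: (load[ag.name] + 1) / ag.power)
        plan[a.name].append(t)
        load[a.name] += 1
    return plan


def cross_check(task, agents):
    """Run the task on two agents; on disagreement, a third agent arbitrates."""
    a, b, referee = random.sample(agents, 3)
    r1, r2 = a.run(task), b.run(task)
    return r1 if r1 == r2 else referee.run(task)


agents = [Agent("pc1", 1.0), Agent("pc2", 2.5), Agent("pc3", 1.5, faulty=True)]
print(schedule(range(10), agents))
print([cross_check(t, agents) for t in range(5)])
```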

  4. The CMS workload management system

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Evans, D. [Fermilab; Foulkes, S. [Fermilab; Hufnagel, D. [Fermilab; Mascheroni, M. [CERN; Norman, M. [UC, San Diego; Maxa, Z. [Caltech; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Riahi, H. [INFN, Perugia; Ryu, S. [Fermilab; Spiga, D. [CERN; Vaandering, E. [Fermilab; Wakefield, Stuart [Imperial Coll., London; Wilkinson, R. [Caltech

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager), a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  5. The CMS workload management system

    International Nuclear Information System (INIS)

    Cinquilli, M; Mascheroni, M; Spiga, D; Evans, D; Foulkes, S; Hufnagel, D; Ryu, S; Vaandering, E; Norman, M; Maxa, Z; Wilkinson, R; Melo, A; Metson, S; Riahi, H; Wakefield, S

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager); a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  6. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
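    As a rough illustration of the hybrid model described above, the sketch below combines a client-server leg (a telemetry server publishing raw downlinked values) with a peer-to-peer leg (an application that consumes raw telemetry and republishes a synthesized status for other peers). The in-process `InfoBus` merely stands in for the information sharing protocol; all names are hypothetical.

```python
# Toy stand-in for the information sharing protocol described above;
# not NASA/JSC code. Everything runs in one process for brevity.
from collections import defaultdict


class InfoBus:
    """Minimal publish/subscribe hub playing the role of the sharing protocol."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # parameter name -> callbacks

    def subscribe(self, name, callback):
        self._subscribers[name].append(callback)

    def publish(self, name, value):
        for callback in list(self._subscribers[name]):
            callback(name, value)


bus = InfoBus()

# Peer-to-peer leg: an application consumes raw telemetry and republishes
# a ground-synthesized status that other flight-controller tools can use.
def cabin_monitor(name, value):
    bus.publish("cabin_status", "NOMINAL" if value < 30.0 else "ALARM")

bus.subscribe("cabin_temp_C", cabin_monitor)
bus.subscribe("cabin_status", lambda name, value: print(name, "=", value))

# Client-server leg: the telemetry server pushes raw downlinked samples.
for sample in [("cabin_temp_C", 22.5), ("cabin_temp_C", 31.2)]:
    bus.publish(*sample)
```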

  7. A distributed computing model for telemetry data processing

    Science.gov (United States)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  8. The WorkQueue project - a task queue for the CMS workload management system

    Science.gov (United States)

    Ryu, S.; Wakefield, S.

    2012-12-01

    We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP). This would require the WorkQueue to run computationally heavy algorithms that are better suited to run in the WMAgents. Instead the request specifies an algorithm that the WorkQueue uses to split the request into reasonable size chunks (known as elements). An advantage of performing lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent, this overcomes temporary unavailability of the global WorkQueue and reduces latency for an agent to begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimate of the amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document oriented NoSQL database. The WorkQueue uses the features of CouchDB (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS which involved individual operators feeding requests into agents. This new approach allows for a
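    The following sketch mimics, in miniature, the mechanism described above: a request carries a splitting algorithm that is evaluated lazily into elements, a global queue orders elements by request priority, and a local queue pulls only as much work as the agent's free resources can cover. The class and function names (`Element`, `GlobalWorkQueue`, `split_by_files`) are illustrative and are not the CMS WorkQueue API.

```python
# Minimal sketch of the WorkQueue idea described above; not CMS code.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Element:
    sort_key: int                       # lower value = higher priority
    request: str = field(compare=False)
    n_jobs: int = field(compare=False)  # estimated work inside the element


def split_by_files(request, n_files, files_per_element, priority):
    """The splitting algorithm carried by a request, evaluated lazily so that
    a growing dataset simply yields additional elements later on."""
    for first in range(0, n_files, files_per_element):
        n = min(files_per_element, n_files - first)
        yield Element(sort_key=-priority, request=request, n_jobs=n)


class GlobalWorkQueue:
    def __init__(self):
        self._heap = []

    def add(self, elements):
        for element in elements:
            heapq.heappush(self._heap, element)

    def pull(self, free_slots):
        """A local queue pulls elements until the agent's free resources are used."""
        pulled = []
        while self._heap and self._heap[0].n_jobs <= free_slots:
            element = heapq.heappop(self._heap)
            pulled.append(element)
            free_slots -= element.n_jobs
        return pulled


gq = GlobalWorkQueue()
gq.add(split_by_files("MC_production", n_files=1000, files_per_element=250, priority=5))
gq.add(split_by_files("data_rereco", n_files=400, files_per_element=200, priority=9))
print([(e.request, e.n_jobs) for e in gq.pull(free_slots=600)])
```

    In the system described by the record, the global and local queues are CouchDB instances kept in sync by replication; the in-memory heap here only stands in for the priority ordering.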

  9. The WorkQueue project - a task queue for the CMS workload management system

    International Nuclear Information System (INIS)

    Ryu, S; Wakefield, S

    2012-01-01

    We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP). This would require the WorkQueue to run computationally heavy algorithms that are better suited to run in the WMAgents. Instead the request specifies an algorithm that the WorkQueue uses to split the request into reasonable size chunks (known as elements). An advantage of performing lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent, this overcomes temporary unavailability of the global WorkQueue and reduces latency for an agent to begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimate of the amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document oriented NoSQL database. The WorkQueue uses the features of CouchDB (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS which involved individual operators feeding requests into agents. This new approach allows for a

  10. The WorkQueue project: A task queue for the CMS workload management system

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, S. [Fermilab; Wakefield, Stuart [Imperial Coll., London

    2012-01-01

    We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP). This would require the WorkQueue to run computationally heavy algorithms that are better suited to run in the WMAgents. Instead the request specifies an algorithm that the WorkQueue uses to split the request into reasonable size chunks (known as elements). An advantage of performing lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent, this overcomes temporary unavailability of the global WorkQueue and reduces latency for an agent to begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimate of the amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document oriented NoSQL database. The WorkQueue uses the features of CouchDB (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS which involved individual operators feeding requests into agents. This new approach allows for a

  11. The CMS online cluster: Setup, operation and maintenance of an evolving cluster

    Energy Technology Data Exchange (ETDEWEB)

    Coarasa, J.A.; et al.

    2012-01-01

    The CMS online cluster consists of more than 2700 computers running about 15000 application instances. These applications implement the necessary services to run the data acquisition of the CMS experiment. In this paper the IT solutions employed on the cluster are reviewed. Details are given on the adopted solutions, which include the following topics: implementation of redundant and load-balanced network and core IT services; deployment and configuration management infrastructure and its customization; a new monitoring infrastructure. Special emphasis will be put on the scalable approach, which allows the size of the cluster to be increased with no administration overhead. Finally, the lessons learnt from the two years of running will be presented.

  12. Last crystals for the CMS chandelier

    CERN Multimedia

    2008-01-01

    In March, the last crystals for CMS’s electromagnetic calorimeter arrived from Russia and China. Like dedicated jewellers crafting an immense chandelier, the CMS ECAL collaborators are working extremely hard to install all the crystals before the start-up of the LHC. One of the last CMS end-cap crystals, complete with identification bar code. Lead tungstate crystals mounted onto one section of the CMS ECAL end caps. Nearly 10 years after the first production crystal arrived at CERN in September 1998, the very last shipment has arrived. These final crystals will be used to complete the end-caps of the electromagnetic calorimeter (ECAL) at CMS. All in all, there are more than 75,000 crystals in the ECAL. The huge quantity of CMS lead tungstate crystals used in the ECAL corresponds to the highest volume ever produced for a single experiment. The excellent quality of the crystals, both in ter...

  13. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  14. CMS ready for winding up

    CERN Multimedia

    2003-01-01

    At the end of October, the last lengths of conductor for the CMS superconducting solenoid were produced. This is another large sub-project of the CMS Magnet successfully finished, after completion of the Yoke last year (see Bulletin 43/2002).

  15. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
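    To make the workflow concrete, here is a hedged sketch of how a Lobster-style tool might describe opportunistic tasks before handing them to its Work Queue master: each task is sized for short-lived slots, runs entirely in user space, takes its software environment from CVMFS and reads its input over XrootD. The `OpportunisticTask` dataclass and `make_tasks` helper are invented for illustration and are not the Lobster API.

```python
# Illustrative only; field names and the helper are hypothetical, not Lobster's API.
from dataclasses import dataclass, field


@dataclass
class OpportunisticTask:
    command: str                                  # runs in user space, no root required
    environment: str                              # CMS software area served by CVMFS
    inputs: list = field(default_factory=list)    # read remotely through XrootD
    outputs: list = field(default_factory=list)   # staged back to local storage
    cores: int = 1


def make_tasks(dataset_files, files_per_task=5):
    """Split a dataset into small, self-contained tasks that fit the short and
    unpredictable slots offered by opportunistic campus resources."""
    for i in range(0, len(dataset_files), files_per_task):
        chunk = dataset_files[i:i + files_per_task]
        yield OpportunisticTask(
            command="run_analysis.sh job_%d" % (i // files_per_task),
            environment="/cvmfs/cms.cern.ch",
            inputs=["root://xrootd-redirector//store/" + f for f in chunk],
            outputs=["out_%d.root" % (i // files_per_task)],
        )


for task in make_tasks(["data/file_%03d.root" % n for n in range(12)]):
    print(task.cores, task.command, len(task.inputs), "input file(s)")
```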

  16. CMS Comic Book

    CERN Document Server

    Gill, Karl Aaron

    2006-01-01

    Titled "CMS Particle Hunter," this colorful comic book style brochure explains to young budding scientists and science enthusiasts in colorful animation how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool. Book invites young students to get involved in particle physics themselves to join the adventure. Written by Dave Barney and Aline Guevera. Layout and drawings by Eric Paiharey and Frederic Vignaux. Available in English, French, German, Italian, Spanish and Portuguese. Year Produced: 2006. Update: September 2013.

  17. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge amounts of data with limited resources. The cloud also helps with resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and of some of the basic security issues pertaining to the cloud.

  18. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  19. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
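    A minimal sketch of the lookup path the abstract describes, under the added assumption that metadata keys are partitioned across burst buffers by hashing; the `BurstBuffer` class and the helper functions are illustrative, not the patented system's interface.

```python
# Illustrative sketch; hash partitioning is an assumption made for the example.
import hashlib


class BurstBuffer:
    """One burst buffer node holding a shard of the distributed key-value store."""

    def __init__(self, name):
        self.name = name
        self.kv = {}  # key -> metadata record

    def put(self, key, metadata):
        self.kv[key] = metadata

    def get(self, key):
        return self.kv.get(key)


def owning_buffer(key, buffers):
    """Determine which of the burst buffers stores the metadata for this key."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return buffers[digest % len(buffers)]


def get_metadata(key, buffers):
    """Route the request to the owning buffer and read the key-value entry."""
    return owning_buffer(key, buffers).get(key)


buffers = [BurstBuffer("bb%d" % i) for i in range(4)]
block_key = "block:/checkpoint/0001"
owning_buffer(block_key, buffers).put(block_key, {"size": 4 << 20, "offset": 0})
print(get_metadata(block_key, buffers))
```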

  20. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing, Software and Analysis challenge (CSA06) last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  1. 42 CFR 460.20 - Notice of CMS determination.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Notice of CMS determination. 460.20 Section 460.20... ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.20 Notice of CMS determination. (a... application to CMS, CMS takes one of the following actions: (1) Approves the application. (2) Denies the...

  2. First results on the performance of the CMS global calorimeter trigger

    CERN Document Server

    Foudas, C; Jones, J; Rose, A; Stettler, M; Sidiropoulos, G; Tapper, A; Brooke, J; Frazier, R; Heath, G; Hansen, M; PH-EP

    2007-01-01

    The CMS Global Calorimeter Trigger (GCT) uses data from the CMS calorimeters to compute a number of kinematical quantities which characterize the LHC event. The GCT output is used by the Global Trigger (GT), along with data from the Global Muon Trigger (GMT), to produce the Level-1 Accept (L1A) decision. The design of the current GCT system commenced early in 2006. After a rapid development phase, all the different GCT components have been produced and a large fraction of them have been installed in the CMS electronics cavern (USC-55), where the GCT system has been under test since March 2007. This paper reports results from tests which took place at USC-55. Initial tests aimed to verify the integrity of the GCT data and to establish that proper synchronization had been achieved, both internally within the GCT and with the Regional Calorimeter Trigger (RCT), which provides the GCT input data, and with the GT, which receives the GCT results. After synchronization and data integrity had been established, Monte Carlo E...
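    For readers unfamiliar with these quantities, the toy calculation below shows the kind of event-level sums a global calorimeter trigger produces from its regional inputs, here taken as (eta, phi, ET) triplets: the scalar sum of transverse energy and the missing transverse energy. It is purely illustrative and is unrelated to the GCT firmware.

```python
# Toy illustration of event-level trigger quantities; not the GCT algorithms.
import math


def total_et(regions):
    """Scalar sum of transverse energy over all calorimeter regions."""
    return sum(et for _, _, et in regions)


def missing_et(regions):
    """Magnitude and azimuth of the negative vector sum of the region ET."""
    ex = -sum(et * math.cos(phi) for _, phi, et in regions)
    ey = -sum(et * math.sin(phi) for _, phi, et in regions)
    return math.hypot(ex, ey), math.atan2(ey, ex)


# Each region given as (eta, phi, ET in GeV); the values are made up.
regions = [(0.5, 0.1, 35.0), (-1.2, 2.8, 20.0), (2.0, -1.5, 12.0)]
print("sum ET =", total_et(regions), " missing ET =", missing_et(regions))
```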

  3. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
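    As background for the abstract above, the sketch below simulates a classical marker-based (Chandy-Lamport style) snapshot, the textbook way to obtain a consistent global state: each node records its own state when it first sees a marker and keeps recording messages on a channel until the marker arrives on that channel. It is a simplified single-process illustration, not the protocol proposed in the paper.

```python
# Simplified, single-process simulation of a marker-based snapshot; it is not
# the paper's protocol, only an illustration of a consistent global snapshot.
from collections import deque

MARKER = "MARKER"


class Node:
    def __init__(self, name):
        self.name = name
        self.state = 0              # e.g. partial result of an iterative AI job
        self.inbox = {}             # sender name -> deque of in-flight messages
        self.open_channels = None   # channels still being recorded
        self.channel_state = {}     # sender -> messages caught in transit
        self.snapshot_state = None

    def start_snapshot(self, net):
        """Record local state, start recording all channels, flood markers."""
        self.snapshot_state = self.state
        self.open_channels = set(self.inbox)
        for other in net.values():
            if other is not self:
                other.inbox[self.name].append(MARKER)

    def deliver(self, sender, msg, net):
        if msg == MARKER:
            if self.snapshot_state is None:
                self.start_snapshot(net)
            self.open_channels.discard(sender)       # stop recording this channel
        else:
            self.state += msg                         # ordinary application message
            if self.open_channels and sender in self.open_channels:
                self.channel_state.setdefault(sender, []).append(msg)


def run(net):
    """Drain every inbox (application messages and markers) until quiescence."""
    progress = True
    while progress:
        progress = False
        for node in net.values():
            for sender, queue in node.inbox.items():
                if queue:
                    node.deliver(sender, queue.popleft(), net)
                    progress = True


net = {name: Node(name) for name in ("A", "B", "C")}
for node in net.values():
    node.inbox = {other: deque() for other in net if other != node.name}

net["B"].inbox["A"].append(5)   # message A -> B still in transit
net["C"].inbox["B"].append(3)   # message B -> C still in transit
net["A"].start_snapshot(net)    # node A initiates the snapshot
run(net)
print({n: (node.snapshot_state, node.channel_state) for n, node in net.items()})
```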

  4. Proceedings of workshop on distributed computing and network

    International Nuclear Information System (INIS)

    Abe, F.; Yuasa, F.

    1993-02-01

    'Distributed Computing and Network' is one of the hot topics in the field of computing. Recent progress in computer technology is providing new paradigms for computing, even in High Energy Physics. In particular, workstation-based computer systems are opening a new, active field of computer applications in the sciences. The major topics discussed in this symposium are distributed computing and wide-area research networks for domestic and international links. The two-day symposium provided enough topics to foresee the next direction of our computing environment. Seventy people got together to discuss these interesting themes as well as to exchange information on computer technologies. (J.P.N.)

  5. The CMS Beam Halo Monitor Electronics

    CERN Document Server

    AUTHOR|(CDS)2080684; Fabbri, F.; Grassi, T.; Hughes, E.; Mans, J.; Montanari, A.; Orfanelli, S.; Rusack, R.; Torromeo, G.; Stickland, D.P.; Stifter, K.

    2016-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern during LHC Long Shutdown 1 to measure the machine-induced background for LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes. The readout electronics chain uses many components developed for the Phase 1 upgrade to the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge-integrating ASIC (QIE10), providing both the signal rise time, with a few ns resolution, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch-crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event-based data. The data is processed in real time and published to CMS and the LHC, providi...

  6. CMS Young Researchers Award 2013 and Fundamental Physics Scholars Award from the CMS Experiment

    CERN Multimedia

    Lapka, Marzena

    2014-01-01

    Photo 2: CMS Fundamental Physics Scholars (FPSs) 1st prize: Joosep Pata, from the Estonian National Institute of Chemical Physics and Biophysics / Photo 1 and 3: CMS Young Researchers Award. From left to right: Guido Tonelli, Colin Bernet, Andre David, Oliver Gutsche, Dmytro Kovalskyi, Andrea Petrucci, Joe Incandela and Jim Virdee

  7. 42 CFR 422.510 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Termination of contract by CMS. 422.510 Section 422... Advantage Organizations § 422.510 Termination of contract by CMS. (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the MA organization meets any of the following: (1...

  8. 9th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Camacho, David; Analide, Cesar; Seghrouchni, Amal; Badica, Costin

    2016-01-01

    This book represents the combined peer-reviewed proceedings of the ninth International Symposium on Intelligent Distributed Computing – IDC’2015, of the Workshop on Cyber Security and Resilience of Large-Scale Systems – WSRL’2015, and of the International Workshop on Future Internet and Smart Networks – FI&SN’2015. All the events were held in Guimarães, Portugal during October 7th-9th, 2015. The 46 contributions published in this book address many topics related to theory and applications of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  9. Verification steps for the CMS event-builder software

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The CMS event-builder software is used to assemble event fragments into complete events at 100 kHz. The data originates at the detector front-end electronics, passes through several computers and is transported from the underground to the high-level trigger farm on the surface. I will present the testing and verification steps a new software version has to pass before it is deployed in production. I will discuss the current practice and possible improvements.

  10. 42 CFR 423.509 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Termination of contract by CMS. 423.509 Section 423... Contracts with Part D plan sponsors § 423.509 Termination of contract by CMS. (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the Part D plan sponsor meets any of the...

  11. 42 CFR 422.210 - Assurances to CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Assurances to CMS. 422.210 Section 422.210 Public...) MEDICARE PROGRAM MEDICARE ADVANTAGE PROGRAM Relationships With Providers § 422.210 Assurances to CMS. (a) Assurances to CMS. Each organization will provide assurance satisfactory to the Secretary that the...

  12. Soft QCD at CMS and ATLAS

    CERN Document Server

    Starovoitov, Pavel; The ATLAS collaboration

    2018-01-01

    A short overview of the recent soft QCD results from the ATLAS and CMS collaborations is presented. The inelastic cross section measurement by CMS at 13 TeV is summarised. The contribution of diffractive processes to the very forward photon spectra studied by ATLAS and LHCf is discussed. The ATLAS measurement of the exclusive two-photon production of muon pairs is presented and compared to previous ATLAS and CMS results.

  13. 23 CFR 500.109 - CMS.

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false CMS. 500.109 Section 500.109 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT MANAGEMENT AND MONITORING SYSTEMS Management Systems § 500.109 CMS. (a) For purposes of this part, congestion means the level at...

  14. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  15. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25/fb of data. The total volume of beam and simulated data products exceeds 100~PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  16. CMS - The Compact Muon Solenoid

    CERN Multimedia

    Bergauer, T; Waltenberger, W; Kratschmer, I; Treberer-treberspurg, W; Escalante del valle, A; Andreeva, I; Innocente, V; Camporesi, T; Malgeri, L; Marchioro, A; Moneta, L; Weingarten, W; Beni, N T; Cimmino, A; Rovere, M; Jafari, A; Lange, C G; Vartak, A P; Gilbert, A J; Pantaleo, F; Reis, T; Cucciati, G; Alipour tehrani, N; Stakia, A; Fallavollita, F; Pizzichemi, M; Rauco, G; Zhang, S; Hu, T; Yazgan, E; Zhang, H; Thomas-wilsker, J; Reithler, H K V; Philipps, B; Merschmeyer, M K; Heidemann, C A; Mukherjee, S; Geenen, H; Kuessel, Y; Weingarten, S; Gallo, E; Schwanenberger, C; Walsh bastos rangel, R; Beernaert, K S; De wit, A M; Elwood, A C; Connor, P; Lelek, A A; Wichmann, K H; Myronenko, V; Kovalchuk, N; Bein, S L; Dreyer, T; Scharf, C; Quast, G; Dierlamm, A H; Barth, C; Mol, X; Kudella, S; Schafer, D; Schimassek, R R; Matorras, F; Calderon tazon, A; Garcia ferrero, J; Bercher, M J; Sirois, Y; Callier, S; Depasse, P; Laktineh, I B; Grenier, G; Boudoul, G; Heath, G P; Hartley, D A; Quinton, S; Tomalin, I R; Harder, K; Francis, V B; Thea, A; Zhang, Z; Loukas, D; Hernath, S T; Naskar, K; Colaleo, A; Maggi, G P; Maggi, M; Loddo, F; Calabria, C; Campanini, R; Cuffiani, M; D'antone, I; Grandi, C; Navarria, F; Guiducci, L; Battilana, C; Tosi, N; Gulmini, M; Meola, S; Longo, E; Meridiani, P; Marzocchi, B; Schizzi, A; Cho, S; Ha, S; Kim, D H; Kim, G N; Md halid, M F B; Yusli, M N B; Dominik, W M; Bunkowski, K; Olszewski, M; Byszuk, A P; Rasteiro da silva, J C; Varela, J; Leong, Q; Sulimov, V; Vorobyev, A; Denisov, A; Murzin, V; Egorov, A; Lukyanenko, S; Postoev, V; Pashenkov, A; Solovey, A; Rubakov, V; Troitsky, S; Kirpichnikov, D; Lychkovskaya, N; Safronov, G; Fedotov, A; Toms, M; Barniakov, M; Olimov, K; Fazilov, M; Umaraliev, A; Dumanoglu, I; Bakirci, N M; Dozen, C; Demiroglu, Z S; Isik, C; Zeyrek, M; Yalvac, M; Ozkorucuklu, S; Chang, Y; Dolgopolov, A; Gottschalk, E E; Maeshima, K; Heavey, A E; Kramer, T; Kwan, S W L; Taylor, L; Tkaczyk, S M; Mokhov, N; Marraffino, J M; Mrenna, S; Yarba, V; Banerjee, B; Elvira, V D; Gray, L A; Holzman, B; Dagenhart, W; Canepa, A; Ryu, S C; Strobbe, N C; Adelman-mc carthy, J K; Contescu, A C; Andre, J O; Wu, J; Dittmer, S J; Bucinskaite, I; Zhang, J; Karchin, P E; Thapa, P; Zaleski, S G; Gran, J L; Wang, S; Zilizi, G; Raics, P P; Bhardwaj, A; Naimuddin, M; Smiljkovic, N; Stojanovic, M; Brandao malbouisson, H; De oliveira martins, C P; Tonelli manganote, E J; Medina jaime, M; Thiel, M; Laurila, S H; Graehling, P; Tonon, N; Blekman, F; Postiau, N J S; Leroux, P J; Van remortel, N; Janssen, X J; Di croce, D; Aleksandrov, A; Shopova, M F; Dogra, S M; Shinoda, A A; Arce, P; Daniel, M; Navarrete marin, J J; Redondo fernandez, I; Guirao elias, A; Cela ruiz, J M; Lottin, J; Gras, P; Kircher, F; Levesy, B; Payn, A; Guilloux, F; Negro, G; Leloup, C; Pasztor, G; Panwar, L; Bhatnagar, V; Bruzzi, M; Sciortino, S; Starodubtsev, O; Azzi, P; Conti, E; Lacaprara, S; Margoni, M; Rossin, R; Tosi, M; Fano', L; Lucaroni, A; Biino, C; Dattola, D; Rotondo, F; Ballestrero, A; Obertino, M M; Kiani, M B; Paterno, A; Magana villalba, R; Ramirez garcia, M; Reyes almanza, R; Gorski, M; Wrochna, G; Bluj, M J; Zarubin, A; Nozdrin, M; Ladygin, V; Malakhov, A; Golunov, A; Skrypnik, A; Sotnikov, A; Evdokimov, N; Tiurin, V; Lokhtin, I; Ershov, A; Platonova, M; Tyurin, N; Slabospitskii, S; Talov, V; Belikov, N; Ryazanov, A; Chao, Y; Tsai, J; Foord, A; Wood, D R; Orimoto, T J; Luckey, P D; Jaditz, S H; Stephans, G S; Darlea, G L; Di matteo, L; Maier, B; Trovato, M; Bhattacharya, S; Roberts, J B; 
Padley, P B; Tu, Z; Rorie, J T; Clarida, W J; Tiras, E; Khristenko, V; Cerizza, G; Pieri, M; Krutelyov, V; Saiz santos, M D; Klein, D S; Derdzinski, M; Murray, M J; Gray, J A; Minafra, N; Castle, J R; Bowen, J L S; Buterbaugh, K; Morrow, S I; Bunn, J; Newman, H; Spiropulu, M; Balcas, J; Lawhorn, J M; Thomas, S D; Panwalkar, S M; Kyriacou, S; Xie, Z; Ojalvo, I R; Salfeld-nebgen, J; Laird, E M; Wimpenny, S J; Yates, B R; Perry, T M; Schiber, C C; Diaz, D C; Uniyal, R; Mesic, B; Kolosova, M; Snow, G R; Lundstedt, C; Johnston, D; Zvada, M; Weitzel, D J; Damgov, J V; Cowden, C S; Giammanco, A; David, P N Y; Zobec, J; Cabrera jamoulle, J B; Daubie, E; Nash, J A; Evans, L; Hall, G; Nikitenko, A; Ryan, M J; Huffman, M A J; Styliaris, E; Evangelou, I; Sharan, M K; Roy, A; Rout, P K; Kalbhor, P N; Bagliesi, G; Braccini, P L; Ligabue, F; Boccali, T; Rizzi, A; Minuti, M; Oh, S; Kim, J; Sen, S; Boz evinay, M; Xiao, M; Hung, W T; Jensen, F O; Mulholland, T D; Kumar, A; Jones, M; Roozbahani, B H; Neu, C C; Thacker, H B; Wolfe, E M; Jabeen, S; Gilmore, J; Winer, B L; Rush, C J; Luo, W; Alimena, J M; Ko, W; Lander, R; Broadley, W H; Shi, M; Furic, I K; Low, J F; Bortignon, P; Alexander, J P; Zientek, M E; Conway, J V; Padilla fuentes, Y L; Florent, A H; Bravo, C B; Crotty, I M; Wenman, D L; Sarangi, T R; Ghabrous larrea, C; Gomber, B; Smith, N C; Long, K D; Roberts, J M; Hildreth, M D; Jessop, C P; Karmgard, D J; Loukas, N; Ferbel, T; Zielinski, M A; Cooper, S I; Jung, A; Van driessche, W G M; Fagot, A; Vermassen, B; Valchkova-georgieva, F K; Dimitrov, D S; Roumenin, T S; Podrasky, V; Re, V; Zucca, S; De canio, F; Romaniuk, R; Teodorescu, L; Krofcheck, D; Anderson, N G; Bell, S T; Salazar ibarguen, H A; Kudinov, V; Onishchenko, S; Naujikas, R; Lyubynskiy, V; Sobolev, O; Khan, M S; Adeel-ur-rehman, A; Hassan, Q U; Ali, I; Kreuzer, P K; Robson, A J; Gadrat, S G; Ivanov, A; Mendis, D; Da silva di calafiori, D R; Zeinali, M; Behnamian, H; Moroni, L; Malvezzi, S; Park, I; Pastika, N J; Oropeza barrera, C; Elkhateeb, E A A; Elmetenawee, W; Mohammed, Y; Tayel, E S A; Mcclatchey, R H; Kovacs, Z; Munir, K; Odeh, M; Magradze, E; Oikashvili, B; Shingade, P; Shukla, R A; Banerjee, S; Kumar, S; Jashal, B K; Grzanka, L; Adam, W; Ero, J; Fabjan, C; Jeitler, M; Rad, N K; Auffray hillemanns, E; Charkiewicz, A; Fartoukh, S; Garcia de enterria adan, D; Girone, M; Glege, F; Loos, R; Mannelli, M; Meijers, F; Sciaba, A; Meschi, E; Ricci, D; Petrucciani, G; Daguin, J; Vazquez velez, C; Karavakis, E; Nourbakhsh, S; Rabady, D S; Ceresa, D; Karacheban, O; Beguin, M; Kilminster, B J; Ke, Z; Meng, X; Zhang, Y; Tao, J; Romeo, F; Spiezia, A; Cheng, L; Zhukov, V; Feld, L W; Autermann, C T; Fischer, R; Erdweg, S; Kress, T H; Dziwok, C; Hansen, K; Schoerner-sadenius, T M; Marfin, I; Keaveney, J M; Diez pardos, C; Muhl, C W; Asawatangtrakuldee, C; Defranchis, M M; Asmuss, J P; Poehlsen, J A; Stober, F M H; Vormwald, B R; Kripas, V; Gonzalez vazquez, D; Kurz, S T; Niemeyer, C; Rieger, J O; Borovkov, A; Shvetsov, I; Sieber, G; Caspart, R; Iqbal, M A; Sander, O; Metzler, M B; Ardila perez, L E; Ruiz jimeno, A; Fernandez garcia, M; Scodellaro, L; Gonzalez sanchez, J F; Curras rivera, E; Semeniouk, I; Ochando, C; Bedjidian, M; Giraud, N A; Mathez, H; Zoccarato, Y D; Ianigro, J; Galbit, G C; Flacher, H U; Shepherd-themistocleous, C H; French, M J; Hill, J A; Jones, L L; Markou, A; Bencze, G L; Mishra, D K; Netrakanti, P K; Jha, V; Chudasama, R; Katta, S; Venditti, R; Cristella, L; Braibant-giacomelli, S; Dallavalle, G; Fabbri, F; Codispoti, G; 
Borgonovi, L; Caponero, M A; Berti, L; Fienga, F; Dafinei, I; Organtini, G; Del re, D; Pettinacci, V; Park, S K; Lee, K S; Kang, M; Kim, B; Park, H K; Kong, D J; Lee, S; Pak, S I; Zolkapli, Z B; Konecki, M A; Walczak, M B; Bargassa, P; Viegas guerreiro leonardo, N T; Levchenko, P; Orishchin, E; Suvorov, V; Uvarov, L; Gruzinskii, N; Pristavka, A; Kozlov, V; Radovskaia, A; Solovey, A; Kolosov, V; Vlassov, E; Parygin, P; Tumasyan, A; Topakli, H; Boran, F; Akin, I V; Oz, C; Gulmez, E; Atakisi, I O; Bakken, J A; Govi, G M; Lewis, J D; Shaw, T M; Bailleux, D; Stoynev, S E; Sexton-kennedy, E M; Huang, C; Lincoln, D W; Roser, R; Ito, A; Adams, M R; Apanasevich, L; Varelas, N; Sandoval gonzalez, I D; Hangal, D A; Yoo, J H; Ovcharova, A K; Bradmiller-feld, J W; Amin, N J; Miller, M P; Patterson, A S; Sharma, R K; Santoro, A; Lassila-perini, K M; Tuominiemi, J; Voutilainen, M A; Wu, X; Gross, L O; Le bihan, A; Fuks, B; Kieffer, E; Pansanel, J; Jansova, M; D'hondt, J; Abuzeid hassan, S A; Bilin, B; Beghin, D; Soultanov, G; Vankov, I D; Konstantinov, P B; Marra da silva, J; De souza santos, A; Arruda ramalho, L; Renker, D; Erdmann, W; Molinero vela, A; Fernandez bedoya, C; Bachiller perea, I; Chipaux, R; Faure, J D; Hamel de monchenault, G; Mandjavidze, I; Rander, J; Ferri, F; Leroy, C L; Machet, M; Nagy, M I; Felcini, M; Kaur, S; Saizu, M A; Civinini, C; Latino, G; Checchia, P; Ronchese, P; Vanini, S; Fantinel, S; Cecchi, C; Leonardi, R; Arneodo, M; Ruspa, M; Pacher, L; Rabadan trejo, R I; Mondragon herrera, C A; Golutvin, I; Zhiltsov, V; Melnichenko, I; Mjavia, D; Cheremukhin, A; Zubarev, E; Kalagin, V; Alexakhin, V; Mitsyn, V; Shulha, S; Vishnevskiy, A; Gavrilenko, M; Boos, E E; Obraztsov, S; Dubinin, M; Demiyanov, A; Dudko, L; Azhgirey, I; Chikilev, O; Turchanovich, L; Rurua, L; Hou, G W; Wang, M; Chang, P; Kumar, A; Liau, J; Lazic, D; Lawson, P D; Zou, D; Wisecarver, A L; Sumorok, K C; Klute, M; Lee, Y; Iiyama, Y; Velicanu, D A; Mc ginn, C; Abercrombie, D R; Tatar, K; Hahn, K A; Nussbaum, T W; Southwick, D C; Cittolin, S; Martin, T; Welke, C V; Wilson, G W; Baringer, P S; Sanders, S J; Mcbrayer, W J; Engh, D J; Sheldon, P D; Gurrola, A; Velkovska, J A; Melo, A M; Padeken, K O; Johnson, C N; Ni, H; Montalvo, R J; Heindl, M D; Ferguson, T; Vogel, H; Mudholkar, T K; Elmer, P; Tully, C; Luo, J; Hanson, G; Jandir, P S; Askew, A W; Kadija, K; Dimovasili, E; Attikis, A; Vasilas, I; Chen, G; Bockelman, B P; Kamalieddin, R; Barrefors, B P; Farleigh, B S; Akchurin, N; Demin, P; Pavlov, B A; Petkov, P S; Goranova, R; Tomsa, J; Lyons, L; Buchmuller, O; Magnan, A; Laner ogilvy, C; Di maria, R; Dutta, S; Thakur, S; Bettarini, S; Bosi, F; Giassi, A; Massa, M; Calzolari, F; Androsov, K; Lee, H; Komurcu, Y; Kim, D W; Wagner, S R; Perloff, A S; Rappoccio, S R; Harrington, C I; Baden, A R; Ricci-tam, F; Kamon, T; Rathjens, D; Pernie, L; Larsen, D; Ji, W; Pellett, D E; Smith, J; Acosta, D E; Field, R D; Yelton, J M; Kotov, K; Wang, S; Smolenski, K W; Mc coll, N W; Dasu, S R; Lanaro, A; Cook, J R; Gorski, T A; Buchanan, J J; Jain, S; Musienko, Y; Taroni, S; Meng, H; Siddireddy, P K; Xie, W; Rott, C; Benedetti, D; Everett, A A; Schulte, J; Mahakud, B; Ryckbosch, D D E; Crucy, S; Cornelis, T G M; Betev, B; Dimov, H; Raykov, P A; Uzunova, D G; Mihovski, K T; Mechinsky, V; Makarenko, V; Yermak, D; Yevarouskaya, U; Salvini, P; Manghisoni, M; Fontaine, J; Agram, J; Palinkas, J; Reid, I D; Bell, A J; Clyne, M N; Zavodchikov, S; Veelken, C; Kannike, K; Dewanjee, R K; Skarupelov, V; Piibeleht, M; Ehataht, K; Chang, S; 
Kuchinski, P; Bukauskas, L; Zhmurin, P; Kamal, A; Mubarak, M; Asghar, M I; Ahmad, N; Muhammad, S; Mansoor-ul-islam, S; Saddique, A; Waqas, M; Irshad, A; Veckalns, V; Toda, S; Choi, Y K; Yu, I; Hwang, C; Yumiceva, F X; Djambazov, L; Meinhard, M T; Becker, R J U; Grimm, O; Wallny, R S; Tavolaro, V R; Eller, P D; Meister, D; Paktinat mehdiabadi, S; Chenarani, S; Dini, P; Leporini, R; Dinardo, M; Brianza, L; Hakkarainen, U T; Parashar, N; Malik, S; Ramirez vargas, J E; Dharmaratna, W; Noh, S; Uang, A J; Kim, J H; Lee, J S H; Jeon, D; You, Z; Assran, Y; Elgammal, S; Ellithi kamel, A Y; Nayak, A K; Dash, D; Koca, N; Kothekar, K K; Karnam, R; Patil, M R; Torims, T; Hoch, M; Schieck, J R; Valentan, M; Spitzbart, D; Lucio alves, F L; Blanchot, G; Gill, K A; Orsini, L; Petrilli, A; Sharma, A; Tsirou, A; Deile, M; Hudson, D A; Gutleber, J; Folch, R; Tropea, P; Cerminara, G; Vichoudis, P; Pardo, T; Sabba, H; Selvaggi, M; Verzetti, M; Ngadiuba, J; Kornmayer, A; Niedziela, J; Aarrestad, T K; He, K; Li, B; Huang, Q; Pierschel, G; Esch, T; Louis, D; Quast, T; Nowack, A S; Beissel, F; Borras, K A; Mankel, R; Pitzl, D D; Kemp, Y; Meyer, A B; Krucker, D B; Mittag, G; Burgmeier, A; Lenz, T; Arndt, T M; Pflitsch, S K; Danilov, V; Dominguez damiani, D; Cardini, A; Kogler, R; Troendle, D C; Aggleton, R C; Lange, J; Reimers, A C; De boer, W; Weber, M M; Theel, A; Mozer, M U; Wayand, S; Harrendorf, M A; Harbaum, T R; El morabit, K; Marco, J; Rodrigo, T; Vila alvarez, I; Lopez garcia, A; Rembser, J; Mathieu, A; Kurca, T; Mirabito, L; Verdier, P; Combaret, C; Newbold, D M; Smith, V; Brooke, J J; Metson, S; Coughlan, J A; Torbet, M J; Belyaev, A; Kyriakis, A; Horvath, D; Veszpremi, V; Topkar, A; Selvaggi-maggi, G; Nuzzo, S V; Romano, F; Marangelli, B; Spinoso, V; Lezki, S; Castro, A; Rovelli, T; Brigliadori, L; Bianco, S; Fabbricatore, P; Farinon, S; Musenich, R; Ferro, F; Gozzelino, A; Buontempo, S; Casolaro, P; Paramatti, R; Vignati, M; Belforte, S; Hong, B; Roh, Y J; Choi, S Y; Son, D; Yang, Y C; Butanov, K; Kotobi, A; Krolikowski, J; Pozniak, K T; Misiura, M; Seixas, J C; Jain, A K; Nemallapudi, M V; Shchipunov, L; Lebedev, V; Skorobogatov, V; Klimenko, K; Terkulov, A; Kirakosyan, M; Azarkin, M; Krasnikov, N; Stepanova, L; Gavrilov, V; Spiridonov, A; Semenov, S; Krokhotin, A; Rusinov, V; Chistov, R; Zhemchugov, E; Nishonov, M; Hmayakyan, G; Khachatryan, V; Ozdemir, K; Ozturk, S; Tali, B; Kangal, E E; Turkcapar, S; Zorbakir, I S; Aliyev, T; Demir, D A; Liu, W; Apollinari, G; Osborne, I; Genser, K; Lammel, S; Whitmore, J; Mommsen, R; Apyan, A; Badgett jr, W F; Atac, M; Joshi, U P; Vidal, R A; Giacchetti, L A; Merkel, P; Johnson, M E; Soha, A L; Tran, N V; Rapsevicius, V; Hirschauer, J F; Voirin, E; Altunay cheung, M; Liu, T T; Mosquera morales, J F; Gerber, C E; Chen, X; Clarke, C J; Stuart, D D; Franco sevilla, M; Marsh, B J; Shivpuri, R K; Adzic, P; De almeida pacheco, M A; Matos figueiredo, D; De queiroz franco, A B; Melo de almeida, M; Bernardo valadao, R; Linden, T; Tuovinen, E V; Jarvinen, T T; Siikonen, H J L; Ripp-baudot, I L; Richer, M; Vander velde, C; Randle-conde, A S; Dong, J; Van haevermaet, H J H; Dimitrov, L; De paula bianchini, C; Muller cascadan, A; Kotlinski, B; Alcaraz maestre, J; Josa mutuberria, M I; Gonzalez lopez, O; Marin munoz, J; Puerta pelayo, J; Rodriguez vazquez, J J; Denegri, D; Jarry, P; Rosowsky, A; Tsipolitis, G; Grunewald, M; Singh, J; Chawla, R; Gupta, R; Giordano, F; Parrini, G; Russo, L; Dosselli, U; Mazzucato, M; Verlato, M; Wulzer, A; Traldi, S; Bortolato, D; Biasini, M; 
Bilei, G M; Movileanu, M; Santocchia, A; Mariani, V; Mariotti, C; Monaco, V; Accomando, E; Pinna angioni, G L; Boimska, B; Yuldashev, B; Kamenev, A; Belotelov, I; Filozova, I; Bunin, P; Golovanov, G; Gribushin, A; Kaminskiy, A; Volkov, P; Vorotnikov, G; Bityukov, S; Kryshkin, V; Petrov, V; Volkov, A; Troshin, S; Levin, A; Sumaneev, O V; Kalinin, A; Kulagin, N; Mandrik, P; Lin, C; Kovalskyi, D; Demiragli, Z; Hsu, D G; Michlin, B A; Fountain, M; Debbins, P A; Durgut, S; Tadel, M; White, A; Molina-perez, J A; Dost, J M; Boren, S S; Klein, A; Bhatti, A; Mesropian, C; Wilkinson, R; Xie, S; Marlow, D R; Jindal, P; Palmer, C A; Narain, M; Berry, E A; Usai, E; Korotkov, A L; Strossman, W; Kennedy, E; Burt, K F; Saha, A; Starodumov, A; Mavromanolakis, G; Nicolaou, C; Mao, Y; Claes, D R; Sill, A F; Lamichhane, K; Antunovic, Z; Piotrzkowski, K; Bondu, O; Dimitrov, A A; Albajar, C; Torga teixeira, R F; Iles, G M; Borg, J; Cripps, N A; Uchida, K; Fayer, S W; Wright, J C; Kokkas, P; Manthos, N; Bhattacharya, S; Nandan, S; Bellazzini, R; Carboni, A; Arezzini, S; Yang, U K; Roskes, J; Corcodilos, L A; Nauenberg, U; Johnson, D; Kharchilava, A; Mc lean, C A; Cox, B B; Hirosky, R J; Cummings, G E; Skuja, A; Bard, R L; Mueller, R D; Puigh, D M; Chertok, M B; Calderon de la barca sanchez, M; Gunion, J F; Vogt, R; Conway, R T; Gearhart, J W; Band, R E; Kukral, O; Korytov, A; Fu, Y; Madorsky, A; Brinkerhoff, A W; Rinkevicius, A; Mcdermott, K P; Tao, Z; Bellis, M; Gronberg, J B; Hauser, J; Bachtis, M; Kubic, J; Nash, W A; Greenler, L S; Caillol, C S; Woods, N; De jesus pardal vicente, M; Trembath-reichert, S; Singovski, A; Wolf, M; Smith, G N; Bucci, R E; Reinsvold, A C; Rupprecht, N C; Taus, R A; Buccilli, A T; Kroeger, R S; Reidy, J J; Barnes, V E; Kress, M K; Thieman, J R; Mccartin, J W; Gul, M; Khvastunov, I; Georgiev, I G; Biselli, A; Berzano, U; Vai, I; Braghieri, A; Cardoso lopes, R; Cuevas maestro, J F; Palencia cortezon, J E; Reucroft, S; Bheesette, S; Butler, A; Ivanov, A; Mizelkov, M; Kashpydai, O; Kim, J; Janulis, M; Zemleris, V; Ali, A; Ahmed, U S; Awan, M I; Lee, J; Dissertori, G; Pauss, F; Musella, P; Gomez espinosa, T A; Pigazzini, S; Vesterbacka olsson, M L; Klijnsma, T; Khakzad, M; Arfaei, H; Bonesini, M; Ciriolo, V; Gomez moreno, B; Linares garcia, L E; Bae, S; Ko, B; Hatakeyama, K; Mahmoud mohammed, M A; Aly, A; Ahmad, A; Bahinipati, S; Kim, T J; Goh, J; Fang, W; Kemularia, O; Melkadze, A; Sharma, S; Rane, A P; Ayala amaya, E R; Akle, B; Palomo pinto, F R; Madlener, T; Spanring, M; Pol, M E; Alda junior, W L; Rodrigues simoes moreira, P; Kloukinas, K; Onnela, A T O; Passardi, G; Perez, E F; Postema, W J; Petagna, P; Gaddi, A; Vieira de castro ferreira da silva, P M; Gastal, M; Dabrowski, A E; Mersi, S; Bianco, M; Alandes pradillo, M; Chen, Y; Kieseler, J; Bawej, T A; Roedne, L T; Hugo, G; Baschiera, M; Loiseau, T L; Donato, S; Wang, Y; Liu, Z; Yue, X; Teng, C; Wang, Z; Liao, H; Zhang, X; Chen, Y; Ahmad, M; Zhao, H; Qi, F; Li, B; Raupach, F; Tonutti, M P; Radziej, M; Fluegge, G; Haj ahmad, W; Kunsken, A; Roy, D M; Ziemons, T; Behrens, U; Henschel, H M; Kleinwort, C H; Dammann, D J; Van onsem, G P; Contreras campana, C J; Penno, M; Haranko, M; Singh, A; Turkot, O; Scheurer, V; Schleper, P; Schwandt, J; Schwarz, D; Hartmann, F; Muller, T; Mallows, S; Funke, D; Baselga bacardit, M; Mitra, S; Martinez rivero, C; Moya martin, D; Hidalgo villena, S; Chazin quero, B; Mine, P M G; Poilleux, P R; Salerno, R A; Martin perez, C; Amendola, C; Caponetto, L; Pugnere, D Y; Giraud, Y A N; Sordini, V; Grimes, M 
A; Burns, D J P; Harper, S J; Hajdu, C; Vami, T A; Dutta, D; Pant, L M; Kumar, V; Sarin, P; Di florio, A; Giacomelli, P; Montanari, A; Siroli, G P; Robutti, E; Maron, G; Fabozzi, F; Galati, G; Rovelli, C I; Della ricca, G; Vazzoler, F; Oh, Y D; Park, W H; Kwon, K H; Choi, J; Kalinowski, A; Santos amaral, L C; Di francesco, A; Velichko, G; Smirnov, I; Kozlov, V; Vavilov, S; Kirianov, A; Dremin, I; Rusakov, S; Nechitaylo, V; Kovzelev, A; Toropin, A; Anisimov, A; Barniakov, A; Gasanov, E; Eskut, E; Polatoz, A; Karaman, T; Zorbilmez, C; Bat, A; Tok, U G; Dag, H; Kaya, O; Tekten, S; Lin, T; Abdoulline, S; Bauerdick, L; Denisov, D; Gingu, C; Green, D; Nahn, S C; Prokofiev, O E; Strait, J B; Los, S; Bowden, M; Tanenbaum, W M; Guo, Y; Dykstra, D W; Mason, D A; Chlebana, F; Cooper, W E; Anderson, J M K; Weber, H A; Christian, D C; Alyari, M F; Diaz cruz, J A; Wang, M; Berry, D R; Siehl, K F; Poudyal, N; Kyre, S A; Mullin, S D; George, C; Szabo, Z; Malhotra, S; Milosevic, J; Prado da silva, W L; Martins mundim filho, L; Sanchez rosas, L J; Karimaki, V J; Toor, S Z; Karadzhinova, A G; Maazouzi, C; Van hove, P J; Hosselet, J; Goorens, R; Brun, H L; Kalsi, A K; Wang, Q; Vannerom, D; Antchev, G; Iaydjiev, P S; Mitev, G M; Amadio, G; Langenegger, U; Kaestli, H C; Meier, B; Fernandez ramos, J P; Besancon, M; Fabbro, B; Ganjour, S; Locci, E; Gevin, O; Suranyi, O; Bansal, S; Kumar, R; Sharma, S; Tuve, C N; Tricomi, A; Meschini, M; Paoletti, S; Sguazzoni, G; Gori, V; Carlin, R; Dal corso, F; Simonetto, F; Torassa, E; Zumerle, G; Borsato, E; Gonella, F; Dorigo, A; Larsen, H; Peroni, C; Trapani, P P; Buarque franzosi, D; Tamponi, U; Mejia guisao, J A; Zepeda fernandez, C H; Szleper, M; Zalewski, P D; Rybka, D K; Gorbunov, I; Perelygin, V; Kozlov, G; Semenov, R; Khvedelidze, A; Kodolova, O; Klyukhin, V; Snigirev, A; Kryukov, A; Ukhanov, M; Sobol, A; Bayshev, I; Akimenko, S; Lei, Y; Chang, Y; Kao, K; Lin, S; Yu, P; Li, Y; Fantasia, C; Gastler, D E; Paus, C; Wyslouch, B; Knuteson, B O; Azzolini, V; Goncharov, M; Brandt, S; Chen, Z; Liu, J; Chen, Z; Freed, S M; Zhang, A; Nachtman, J M; Penzo, A; Akgun, U; Yi, K; Rahmat, R; Gandrajula, R P; Dilsiz, K; Letts, J; Sharma, V A; Holzner, A G; Wuerthwein, F K; Padhi, S; Suarez silva, I M; Tapia takaki, D J; Stringer, R W; Kropivnitskaya, A; Majumder, D; Al-bataineh, A A; Gabella, W E; Johns, W E; Mora, J G; Shi, Z; Ciesielski, R A; Bornheim, A; Bartz, E H; Doroshenko, J; Halkiadakis, E; Salur, S; Robles, J A; Gray, R C; Saka, H; Osherson, M A; Hughes, E J; Paulini, M G; Russ, J S; Jang, D W; Piroue, P; Olsen, J D; Sands, W; Saluja, S; Cutts, D; Hadley, M H; Hakala, J C; Clare, R; Luthra, A P; Paneva, M I; Seto, R K; Mac intire, D A; Tentindo, S; Wahl, H; Chokheli, D; Micanovic, S; Razis, P; Mousa, J; Pantelides, S; Qian, S; Li, W; Stieger, B B; Lee, S W; Michotte de welle, D; De favereau de jeneret, J; Bakhshiansohi, H; Krintiras, G; Caputo, C; Sabev, C; Batinkov, A I; Zenz, S C; Pesaresi, M F; Summers, S P; Saoulidou, N; Koraka, C K; Ghosh, S; Sikdar, A K; Castaldi, R; Dell'orso, R; Palmonari, F; Rolandi, L; Moggi, A; Fedi, G; Coscetti, S; Seo, S H; Cankocak, K; Cumalat, J P; Smith, J G; Iashvili, I; Gallo, S M; Parker, A M; Ledovskoy, A; Hung, P Q; Vaman, D; Goodell, J D; Gomez, J A; Celik, A; Luo, S; Hill, C S; Francis, B P; Tripathi, S M; Squires, M K; Thomson, J A; Brainerd, C; Tuli, S; Bourilkov, D; Mitselmakher, G; Patterson, J R; Kuznetsov, V Y; Tan, S M; Strohman, C R; Rebassoo, F O; Valouev, V; Zelepukin, S; Lusin, S; Vuosalo, C O U; Ruggles, T H; Rusack, R; 
Woodard, A E; Meng, F; Dev, N; Vishnevskiy, D; Cremaldi, L M; Oliveros tautiva, S J; Jones, T M; Wang, F; Zaganidis, N; Tytgat, M G; Fedorov, A; Korjik, M; Panov, V; Montagna, P; Vitulo, P; Traversi, G; Gonzalez caballero, I; Eysermans, J; Logatchev, O; Orlov, A; Tikhomirov, A; Kulikova, T; Strumia, A; Nam, S K; Soric, I; Padimanskas, M; Siddiqi, H M; Qazi, S F; Ahmad, M; Makouski, M; Chakaberia, I; Mitchell, T B; Baarmand, M; Hits, D; Theofilatos, K; Mohr, N; Jimenez estupinan, R; Micheli, F; Pata, J; Corrodi, S; Mohammadi najafabadi, M; Menasce, D L; Pedrini, D; Malberti, M; Linn, S L; Mesa, D; Tuuva, T; Carrillo montoya, C A; Roque romero, G A; Suwonjandee, N; Kim, H; Khalil ibrahim, S S; Mahrous mohamed kassem, A M; Trojman, L; Sarkar, U; Bhattacharya, S; Babaev, A; Okhotnikov, V; Nakad, Z S; Fruhwirth, R; Majerotto, W; Mikulec, I; Rohringer, H; Strauss, J; Krammer, N; Hartl, C; Pree, E; Rebello teles, P; Ball, A; Bialas, W; Brachet, S B; Gerwig, H; Lourenco, C; Mulders, M P; Vasey, F; Wilhelmsson, M; Dobson, M; Botta, C; Dunser, M F; Pol, A A; Suthakar, U; Takahashi, Y; De cosa, A; Hreus, T; Chen, G; Chen, H; Jiang, C; Yu, T; Klein, K; Schulz, J; Preuten, M; Millet, P N; Keller, H C; Pistone, C; Eckerlin, G; Jung, J; Mnich, J; Jansen, H; Wissing, C; Savitskyi, M; Eichhorn, T V; Harb, A; Botta, V; Martens, I; Knolle, J; Eren, E; Reichelt, O; Schutze, P J; Saibel, A; Schettler, H H; Schumann, S; Kutzner, V G; Husemann, U; Giffels, M; Akbiyik, M; Friese, R M; Baur, S S; Faltermann, N; Kuhn, E; Gottmann, A I D; Muller, D; Balzer, M N; Maier, S; Schnepf, M J; Wassmer, M; Renner, C W; Tcherniakhovski, D; Piedra gomez, J; Vilar cortabitarte, R; Trevisani, N; Boudry, V; Charlot, C P; Tran, T H; Thiant, F; Lethuillier, M M; Perries, S O; Popov, A; Morrissey, Q; Brummitt, A J; Bell, S J; Assiouras, P; Sikler, F; De palma, M; Fiore, L; Pompili, A; Marzocca, C; Errico, F; Soldani, E; Cavallo, F R; Rossi, A M; Torromeo, G; Masetti, G; Virgilio, S; Thyssen, F D M; Iorio, A O M; Montecchi, M; Santanastasio, F; Bulfon, C; Zanetti, A M; Casarsa, M; Han, D; Song, J; Ibrahim, Z A B; Faccioli, P; Gallinaro, M; Beirao da cruz e silva, C; Kuznetsova, E; Levchuk, L; Andreev, V; Toropin, A; Dermenev, A; Karpikov, I; Epshteyn, V; Uliyanov, A; Polikarpov, S; Markin, O; Cagil, A; Karapinar, G; Isildak, B; Yu, S; Banicz, K B; Cheung, H W K; Butler, J N; Quigg, D E; Hufnagel, D; Rakness, G L; Spalding, W J; Bhat, P; Kreis, B J; Jensen, H B; Chetluru, V; Albert, M; Hu, Z; Mishra, K; Vernieri, C; Larson, K E; Zejdl, P; Matulik, M; Cremonesi, M; Doualot, N; Ye, Z; Wu, Z; Geffert, P B; Dutta, V; Heller, R E; Dorsett, A L; Choudhary, B C; Arora, S; Ranjeet, R; Melo da costa, E; Torres da silva de araujo, F; Da silveira, G G; Alves coelho, E; Belchior batista das chagas, E; Buss, N H; Luukka, P R; Tuominen, E M; Havukainen, J J; Tigerstedt, U B S; Goerlach, U; Patois, Y; Collard, C; Mathieu, C; Lowette, S R J; Python, Q P; Moortgat, S; Vanlaer, P; De lentdecker, G W P; Rugovac, S; Tavernier, F F; Beaumont, W; Van de klundert, M; Vankov, P H; Verguilov, V Z; Hadjiiska, R M; De moraes gregores, E; Iope, R L; Ruiz vargas, J C; Barcala riveira, M J; Hernandez calama, J M; Oller, J C; Flix molina, J; Navarro tobar, A; Sastre alvaro, J; Redondo ferrero, D D; Titov, M; Bausson, P; Major, P; Bala, S; Dhingra, N; Kumari, P; Costa, S; Pelli, S; Meneguzzo, A T; Passaseo, M; Pegoraro, M; Montecassiano, F; Dorigo, T; Silvestrin, L; Del duca, V; Demaria, N; Ferrero, M I; Mussa, R; Cartiglia, N; Mazza, G; Maina, E; Dellacasa, G; 
Covarelli, R; Cotto, G; Sola, V; Monteil, E; Shchelina, K; Castilla-valdez, H; De la cruz burelo, E; Kazana, M; Gorbunov, N; Kosarev, I; Smirnov, V; Korenkov, V; Savina, M; Lanev, A; Semenyushkin, I; Kashunin, I; Krouglov, N; Markina, A; Bunichev, V; Zotov, N; Miagkov, I; Nazarova, E; Uzunyan, A; Riutin, R; Tsverava, N; Paganis, E; Chen, K; Lu, R; Psallidas, A; Gorodetzky, P P; Hazen, E S; Avetisyan, A; Richardson, C A; Busza, W; Roland, C E; Cali, I A; Marini, A C; Wang, T; Schmitt, M H; Geurts, F; Ecklund, K M; Repond, J O; Schmidt, I; George, N; Ingram, F D; Wetzel, J W; Ogul, H; Spanier, S M; Mrak tadel, A; Zevi della porta, G J; Maguire, C F; Janjam, R K; Chevtchenko, S; Zhu, R; Voicu, B R; Mao, J; Stone, R L; Schnetzer, S R; Nash, K C; Kunnawalkam elayavalli, R; Laflotte, I; Weinberg, M G; Mc cracken, M E; Kalogeropoulos, A; Raval, A H; Cooperstein, S B; Landsberg, G; Kwok, K H M; Ellison, J A; Gary, J W; Si, W; Hagopian, V; Hagopian, S L; Bertoldi, M; Brigljevic, V; Ptochos, F; Ather, M W; Konstantinou, S; Yang, D; Li, Q; Attebury, G; Siado castaneda, J E; Lemaitre, V; Caebergs, T P M; Litov, L B; Fernandez de troconiz, J; Colling, D J; Davies, G J; Raymond, D M; Virdee, T S; Bainbridge, R J; Lewis, P; Rose, A W; Bauer, D U; Sotiropoulos, S; Papadopoulos, I; Triantis, F; Aslanoglou, X; Majumdar, N; Devadula, S; Ciocci, M A; Messineo, A; Palla, F; Grippo, M T; Yu, G B; Willemse, T; Lamsa, J; Blumenfeld, B J; Maksimovic, P; Gritsan, A; Cocoros, A A; Arnold, P; Tonwar, S C; Eno, S C; Mignerey, A L C; Nabili, S; Dalchenko, M; Maghrbi, Y; Huang, T; Sheharyar, A; Durkin, L S; Wang, Z; Tos, K M; Kim, B J; Guo, Y; Ma, P; Rosenzweig, D J; Reeder, D D; Smith, W; Surkov, A; Mohapatra, A K; Maurisset, A; Mans, J M; Kubota, Y; Frahm, E J; Chatterjee, R M; Ruchti, R; Mc cauley, T P; Ivie, P A; Betchart, B A; Hindrichs, O H; Sultana, M; Henderson, C; Sanders, D; Summers, D; Perera, L; Miller, D H; Miyamoto, J; Peng, C; Zahariev, R Z; Peynekov, M M; Ratti, L; Ressegotti, M; Czellar, S; Molnar, J; Khan, A; Morton, A; Vischia, P; Erice cid, C F; Carpinteyro bernardino, S; Chmelev, D; Smetannikov, V; Hektor, A; Kadastik, M; Godinovic, N; Simelevicius, D; Alvi, O I; Hoorani, H U R; Shahzad, H; Shah, M A; Shoaib, M; Rao, M A S; Sidwell, R; Roettger, T J; Corkill, S; Lustermann, W; Roeser, U H; Backhaus, M; Perrin, G L; Naseri, M; Rapuano, F; Redaelli, N; Carbone, L; Spiga, F; Brivio, F; Monti, F; Markowitz, P E; Rodriguez, J L; Morelos pineda, A; Norberg, S R; Ryu, M S; Jeng, Y G; Esteban lallana, M C; Trabelsi, A; Dittmann, J R; Elsayed, E; Khan, Z A; Soomro, K; Janikashvili, M; Kapoor, A; Rastogi, A; Remnev, G; Hrubec, J; Wulz, C; Fichtinger, S K; Abbaneo, D; Janot, P; Racz, A; Roche, J; Ryjov, V; Sphicas, P; Treille, D; Wertelaers, P; Cure, B R; Fulcher, J R; Moortgat, F W; Bocci, A; Giordano, D; Hegeman, J G; Hegner, B; Gallrapp, C; Cepeda hermida, M L; Riahi, H; Chapon, E; Orfanelli, S; Guilbaud, M R J; Seidel, M; Merlin, J A; Heidegger, C; Schneider, M A; Robmann, P W; Salerno, D N; Galloni, C; Neutelings, I W; Shi, J; Li, J; Zhao, J; Pandoulas, D; Rauch, M P; Schael, S; Hoepfner, K; Weber, M K; Teyssier, D F; Thuer, S; Rieger, M; Albert, A; Muller, T; Sert, H; Lohmann, W F; Ntomari, E; Grohsjean, A J; Wen, Y; Ron alvarez, E; Hampe, J; Bin anuar, A A; Blobel, V; Mattig, S; Haller, J; Sonneveld, J M; Malara, A; Rabbertz, K H; Freund, B; Schell, D B; Savoiu, D; Geerebaert, Y; Becheva, E L; Nguyen, M A; Stahl leiton, A G; Magniette, F B; Fay, J; Gascon-shotkin, S M; Ille, B; Viret, S; Finco, L; 
Brown, R; Cockerill, D; Williams, T S; Markou, C; Anagnostou, G; Mohanty, A K; Creanza, D M; De robertis, G; Verwilligen, P O J; Perrotta, A; Fanfani, A; Ciocca, C; Ravera, F; Toniolo, N; Badoer, S; Paolucci, P; Khan, W A; Voevodina, E; De iorio, A; Cavallari, F; Bellini, F; Cossutti, F; La licata, C; Da rold, A; Lee, K; Go, Y; Park, J; Kim, M S; Wan abdullah, W; Toldaiev, O; Golovtcov, V; Oreshkin, V; Sosnov, D; Soroka, D; Gninenko, S; Pivovarov, G; Erofeeva, M; Pozdnyakov, I; Danilov, M; Tarkovskii, E; Chadeeva, M; Philippov, D; Bychkova, O; Kardapoltsev, L; Onengut, G; Cerci, S; Vergili, M; Dolek, F; Sever, R; Gamsizkan, H; Ocalan, K; Dogan, H; Kaya, M; Kuo, C; Chang, Y; Albrow, M G; Banerjee, S; Berryhill, J W; Chevenier, G; Freeman, J E; Green, C H; O'dell, V R; Wenzel, H; Lukhanin, G; Di luca, S; Spiegel, L G; Deptuch, G W; Ratnikova, N; Paterno, M F; Burkett, K A; Jones, C D; Klima, B; Fagan, D; Hasegawa, S; Thompson, R; Gecse, Z; Liu, M; Pedro, K J; Jindariani, S; Zimmerman, T; Skirvin, T M; Hofman, D J; Evdokimov, O; Jung, K E; Trauger, H C; Gouskos, L; Karancsi, J; Kumar, A; Garg, R B; Keshri, S; Nogima, H; Sznajder, A; Vilela pereira, A; Eerola, P A; Pekkanen, J T K; Guldmyr, J H; Gele, D; Charles, L; Bonnin, C; Bourgatte, G; De clercq, J T; Favart, L; Grebenyuk, A; Yang, Y; Allard, Y; Genchev, V I; Galli mercadante, P; Tomei fernandez, T R; Ahuja, S; Ingram, Q; Rohe, T V; Colino, N; Ferrando, A; Garcia-abia, P; Calvo alamillo, E; Goy lopez, S; Delgado peris, A; Alvarez fernandez, A; Couderc, F; Moudden, Y; Potenza, R; D'alessandro, R; Landi, G; Viliani, L; Bisello, D; Gasparini, F; Michelotto, M; Benettoni, M; Bellato, M A; Fanzago, F; De castro manzano, P; Mantovani, G; Menichelli, M; Passeri, D; Placidi, P; Manoni, E; Storchi, L; Cirio, R; Romero, A; Staiano, A; Pastrone, N; Solano, A M; Argiro, S; Bellan, R; Duran osuna, M C; Ershov, Y; Zamyatin, N; Palchik, V; Afanasyev, S; Nikonov, E; Miller, M; Baranov, A; Ivanov, V; Petrushanko, S; Perfilov, M; Eyyubova, G; Baskakov, A; Kachanov, V; Korablev, A; Bordanovskiy, A; Kepuladze, Z; Hsiung, Y B; Wu, S; Rankin, D S; Jacob, C J; Alverson, G; Hortiangtham, A; Roland, G M; Gomez ceballos retuerto, G; Innocenti, G M; Allen, B L; Baty, A A; Narayanan, S M; Hu, M; Bi, R; Sung, K K H; Gunter, T K; Bueghly, J D; Yepes stork, P P; Mestvirishvili, A; Miller, M J; Norbeck, J E; Snyder, C M; Branson, J G; Sfiligoi, I; Rogan, C S; Edwards-bruner, C R; Young, R W; Verweij, M; Goulianos, K; Galvez, P D; Zhu, K; Lapadatescu, V; Dutta, I; Somalwar, S V; Park, M; Kaplan, S M; Feld, D B; Vorobiev, I; Lange, D; Zuranski, A M; Mei, K; Knight iii, R R; Spencer, E; Hogan, J M; Syarif, R; Olmedo negrete, M A; Ghiasi shirazi, S; Erodotou, E; Ban, Y; Xue, Z; Kravchenko, I; Keller, J D; Knowlton, D P; Wigmans, M E J; Volobouev, I; Peltola, T H T; Kovac, M; Bruno, G L; Gregoire, G; Delaere, C; Bodlak, M; Della negra, M J; James, T O; Shtipliyski, A M; Tziaferi, E; Karageorgos, V W; Karasavvas, D; Fountas, K; Mukhopadhyay, S; Basti, A; Raffaelli, F; Spandre, G; Mazzoni, E; Manca, E; Mandorli, G; Yoo, H D; Aerts, A; Eminizer, N C; Amram, O; Stenson, K M; Ford, W T; Green, M L; Kellogg, R; Jeng, G; Kunkle, J M; Baron, O; Feng, Y; Wong, K; Toufique, Y; Sehgal, V; Breedon, R E; Cox, P T; Mulhearn, M J; Gerhard, R M; Taylor, D N; Konigsberg, J; Sperka, D M; Lo, K H; Carnes, A M; Quach, D M; Li, T; Andreev, V; Herve, L A M; Klabbers, P R; Svetek, A; Hussain, U; Evans, A C; Lannon, K P; Fedorov, S; Bodek, A; Demina, R; Khukhunaishvili, A; West, C A; Perez, C U; 
Godang, R; Meier, M; Neumeister, N; Gruchala, M M; Zagurski, K B; Prosolovich, V; Kuhn, J; Ratti, S P; Riccardi, C M; Vacchi, C; Szekely, G; Hobson, P R; Fernandez menendez, J; Rodriguez bouza, V; Butler, P; Pedraza morales, M I; Barakat, N; Sakharov, V; Lavrenov, P; Ahmed, I; Kim, T Y; Pac, M Y; Sculac, T; Gajdosik, T; Tamosiunas, K; Juodagalvis, A; Dudenas, V; Barannik, S; Bashir, A; Khan, F; Saeed, F; Khan, M T; Maravin, Y; Mohammadi, A; Noonan, D C; Saunders, M D; Dittmar, M; Donega, M; Perrozzi, L; Nageli, C; Dorfer, C; Zhu, D H; Spirig, Y A; Ruini, D; Alishahiha, M; Ardalan, F; Saramad, S; Mansouri, R; Eskandari tadavani, E; Ragazzi, S; Tabarelli de fatis, T; Govoni, P; Ghezzi, A; Stringhini, G; Sevilla moreno, A C; Smith, C J; Abdelalim, A A; Hassan, A F A; Swain, S K; Sahoo, D K; Carrera jarrin, E F; Chauhan, S; Munoz chavero, F; Ambrogi, F; Hensel, C; Alves, G A; Baechler, J; Christiansen, J; De roeck, A; Gayde, J; Hansen, M; Kienzle, W; Reynaud, S; Schwick, C; Troska, J; Zeuner, W D; Osborne, J A; Moll, M; Franzoni, G; Tinoco mendes, A D; Milenovic, P; Garai, Z; Bendavid, J L; Dupont, N A; Gulhan, D C; Daponte, V; Martinez turtos, R; Giuffredi, R; Rapacz, K J; Otiougova, P; Zhu, G; Leggat, D A; Kiesel, M K; Lipinski, M; Wallraff, W; Meyer, A; Pook, T; Pooth, O; Behnke, O; Eckstein, D; Fischer, D J; Garay garcia, J; Vagnerini, A; Klanner, R; Stadie, H; Perieanu, A; Benecke, A; Abbas, S M; Schroeder, M; Lobelle pardo, P; Chwalek, T; Heidecker, C; Floh, K M; Gomez, G; Cabrillo bartolome, I J; Orviz fernandez, P; Duarte campderros, J; Busson, P; Dobrzynski, L; Fontaine, G R R; Granier de cassagnac, R; Paganini, P R J; Arleo, F P; Balagura, V; Martin blanco, J; Ortona, G; Kucher, I; Contardo, D C; Lumb, N; Baulieu, G; Lagarde, F; Shchablo, K; Heath, H F; Kreczko, L; Clement, E J; Paramesvaran, S; Bologna, S; Bell, K W; Petyt, D A; Moretti, S; Durkin, T J; Daskalakis, G; Kataria, S K; Iaselli, G; Pugliese, G; My, S; Sharma, A; Abbiendi, G; Taneja, S; Benussi, L; Fabbri, F; Calvelli, V; Frizziero, E; Barone, L M; De notaristefani, F; D'imperio, G; Gobbo, B; Yusupov, H; Liew, C S; Zabolotny, W M; Sobolev, S; Gavrikov, Y; Kozlov, I; Golubev, N; Andreev, Y; Tlisov, D; Zaytsev, V; Stepennov, A; Popova, E; Kolchanova, A; Shtol, D; Sirunyan, A; Gokbulut, G; Kara, O; Damarseckin, S; Guler, A M; Ozpineci, A; Hayreter, A; Li, S; Gruenendahl, S; Yarba, J; Para, A; Ristori, L F; Rubinov, P M; Reichanadter, M A; Churin, I; Beretvas, A; Muzaffar, S M; Lykken, J D; Gutsche, O; Baldin, B; Uplegger, L A; Lei, C M; Wu, W; Derylo, G E; Ruschman, M K; Lipton, R J; Whitbeck, A J; Schmitt, R; Contreras pasuy, L C; Olsen, J T; Cavanaugh, R J; Betts, R R; Wang, H; Sturdy, J T; Gutierrez jr, A; Campagnari, C F; White, D T; Brewer, F D; Qu, H; Ranjan, K; Lalwani, K; Md, H; Shah, A H; Fonseca de souza, S; De jesus damiao, D; Revoredo, E A; Chinellato, J A; Amadei marques da costa, C; Lampen, P T; Wendland, L A; Brom, J; Andrea, J; Tavernier, S; Van doninck, W K; Van mulders, P K A; Clerbaux, B; Rougny, R; Rashevski, G D; Rodozov, M N; Padula, S; Bernardes, C A; Dias maciel, C; Deiters, K; Feichtinger, D; Wiederkehr, S A; Cerrada, M; Fouz iglesias, M; Senghi soares, M; Pasquetto, E; Ferry, S C; Georgette, Z; Malcles, J; Csanad, M; Lal, M K; Walia, G; Kaur, A; Ciulli, V; Lenzi, P; Zanetti, M; Costa, M; Dughera, G; Bartosik, N; Ramirez sanchez, G; Frueboes, T M; Karjavine, V; Skachkov, N; Litvinenko, A; Petrosyan, A; Teryaev, O; Trofimov, V; Makankin, A; Golunov, A; Savrin, V; Korotkikh, V; Vardanyan, I; Lukina, O; 
Belyaev, A; Korneeva, N; Petukhov, V; Skvortsov, V; Konstantinov, D; Efremov, V; Smirnov, N; Shiu, J; Chen, P; Rohlf, J; Sulak, L R; St john, J M; Morse, D M; Krajczar, K F; Mironov, C M; Niu, X; Wang, J; Charaf, O; Matveev, M; Eppley, G W; Mccliment, E R; Ozok, F; Bilki, B; Zieser, A J; Olivito, D J; Wood, J G; Hashemi, B T; Bean, A L; Wang, Q; Tuo, S; Xu, Q; Roberts, J W; Anderson, D J; Lath, A; Jacques, P; Sun, M; Andrews, M B; Svyatkovskiy, A; Hardenbrook, J R; Heintz, U; Lee, J; Wang, L; Prosper, H B; Adams, J R; Liu, S; Wang, D; Swanson, D; Thiltges, J F; Undleeb, S; Finger, M; Beuselinck, R; Rand, D T; Tapper, A D; Malik, S A; Lane, R C; Panagiotou, A; Diamantopoulou, M; Vourliotis, E; Mallios, S; Mondal, K; Bhattacharya, R; Bhowmik, D; Libby, J F; Azzurri, P; Foa, L; Tenchini, R; Verdini, P G; Ciampa, A; Radburn-smith, B C; Park, J; Swartz, M L; Sarica, U; Borcherding, F O; Barria, P; Goadhouse, S D; Xia, F; Joyce, M L; Belloni, A; Bouhali, O; Toback, D; Osipenkov, I L; Almes, G T; Walker, J W; Bylsma, B G; Lefeld, A J; Conway, J S; Flores, C S; Avery, P R; Terentyev, N; Barashko, V; Ryd, A P E; Tucker, J M; Heltsley, B K; Wittich, P; Riley, D S; Skinnari, L A; Chu, J Y; Ignatenko, M; Lindgren, M A; Saltzberg, D P; Peck, A N; Herve, A A M; Savin, A; Herndon, M F; Mason, W P; Martirosyan, S; Grahl, J; Hansen, P D; Saradhy, R; Mueller, C N; Planer, M D; Suh, I S; Hurtado anampa, K P; De barbaro, P J; Garcia-bellido alvarez de miranda, A A; Korjenevski, S K; Moolekamp, F E; Fallon, C T; Acosta castillo, J G; Gutay, L; Barker, A W; Gough, E; Poyraz, D; Verbeke, W L M; Beniozef, I S; Krasteva, R L; Winn, D R; Fenyvesi, A C; Makovec, A; Munro, C G; Sanchez cruz, S; Bernardino rodrigues, N A; Lokhovitskiy, A; Uribe estrada, C; Rebane, L; Racioppi, A; Kim, H; Kim, T; Puljak, I; Boyaryntsev, A; Saeed, M; Tanwir, S; Butt, U; Hussain, A; Nawaz, A; Khurshid, T; Imran, M; Sultan, A; Naeem, M; Kaadze, K; Modak, A; Taylor, R D; Kim, D; Grab, C; Nessi-tedaldi, F; Fischer, J; Manzoni, R A; Zagozdzinska-bochenek, A A; Berger, P; Reichmann, M P; Hashemi, M; Rezaei hosseinabadi, F; Paganoni, M; Farina, F M; Joshi, Y R; Avila bernal, C A; Cabrera mora, A L; Segura delgado, M A; Gonzalez hernandez, C F; Asavapibhop, B; U-ruekolan, S; Kim, G; Choi, M; Aly, S; El sawy, M; Castaneda hernandez, A M; Pinna, D; Shamdasani, J; Tavkhelidze, D; Hegde, V; Aziz, T; Sur, N; Sutar, B J; Karmakar, S; Ghete, V M; Dragicevic, M G; Brandstetter, J; Marques moraes, A; Molina insfran, J A; Aspell, P; Baillon, P; Barney, D; Honma, A; Pape, L; Sakulin, H; Macpherson, A L; Bangert, N; Guida, R; Steggemann, J; Voutsinas, G G; Da silva gomes, D; Ben mimoun bel hadj, F; Bonnaud, J Y R; Canelli, F M; Bai, J; Qiu, J; Bian, J; Cheng, Y; Kukulies, C; Teroerde, M; Erdmann, M; Hebbeker, T; Zantis, F; Scheuch, F; Erdogan, Y; Campbell, A J; Kasemann, M; Lange, W; Raspiareza, A; Melzer-pellmann, I; Aldaya martin, M; Lewendel, B; Schmidt, R S; Lipka, E; Missiroli, M; Grados luyando, J M; Shevchenko, R; Babounikau, I; Steinbrueck, G; Vanhoefer, A; Ebrahimi, A; Pena rodriguez, K J; Niedziela, M A; Eich, M M; Froehlich, A; Simonis, H J; Katkov, I; Wozniewski, S; Marco de lucas, R J; Lopez virto, A M; Jaramillo echeverria, R W; Hennion, P; Zghiche, A; Chiron, A; Romanteau, T; Beaudette, F; Lobanov, A; Grasseau, G J; Pierre-emile, T B; El mamouni, H; Gouzevitch, M; Goldstein, J; Cussans, D G; Seif el nasr, S A; Titterton, A S; Ford, P J W; Olaiya, E O; Salisbury, J G; Paspalaki, G; Asenov, P; Hidas, P; Kiss, T N; Zalan, P; Shukla, P; 
Abbrescia, M; De filippis, N; Donvito, G; Radogna, R; Miniello, G; Gelmi, A; Capiluppi, P; Marcellini, S; Odorici, F; Bonacorsi, D; Genta, C; Ferri, G; Saviano, G; Ferrini, M; Minutoli, S; Tosi, S; Lista, L; Passeggio, G; Breglio, G; Merola, M; Diemoz, M; Rahatlou, S; Baccaro, S; Bartoloni, A; Talamo, I G; Cipriani, M; Kim, J Y; Oh, G; Lim, J H; Lee, J; Mohamad idris, F B; Gani, A B; Cwiok, M; Doroba, K; Martins galinhas, B E; Kim, V; Krivshich, A; Vorobyev, A; Ivanov, Y; Tarakanov, V; Lobodenko, A; Obikhod, T; Isayev, O; Kurov, O; Leonidov, A; Lvova, N; Kirsanov, M; Suvorova, O; Karneyeu, A; Demidov, S; Konoplyannikov, A; Popov, V; Pakhlov, P; Vinogradov, S; Klemin, S; Blinov, V; Skovpen, I; Chatrchyan, S; Grigorian, N; Kayis topaksu, A; Sunar cerci, D; Hos, I; Guler, Y; Kiminsu, U; Serin, M; Deniz, M; Turan, I; Eryol, F; Pozdnyakov, A; Liu, Z; Doan, T H; Hanlon, J E; Mcbride, P L; Pal, I; Garren, L; Oleynik, G; Harris, R M; Bolla, G; Kowalkowski, J B; Evans, D E; Vaandering, E W; Patrick, J F; Rechenmacher, R; Prosser, A G; Messer, T A; Tiradani, A R; Rivera, R A; Jayatilaka, B A; Duarte, J M; Todri, A; Harr, R F; Richman, J D; Bhandari, R; Dordevic, M; Cirkovic, P; Mora herrera, C; Rosa lopes zachi, A; De paula carvalho, W; Kinnunen, R L A; Lehti, S T; Maeenpaeae, T H; Bloch, D; Chabert, E C; Rudolf, N G; Devroede, O; Skovpen, K; Lontkovskyi, D; De wolf, E A; Van mechelen, P; Van spilbeeck, A B E; Georgiev, L S; Novaes, S F; Costa, M A; Costa leal, B; Horisberger, R P; De la cruz, B; Willmott, C; Perez-calero yzquierdo, A M; Dejardin, M M; Mehta, A; Barbagli, G; Focardi, E; Bacchetta, N; Gasparini, U; Pantano, D; Sgaravatto, M; Ventura, S; Zotto, P; Candelori, A; Pozzobon, N; Boletti, A; Servoli, L; Postolache, V; Rossi, A; Ciangottini, D; Alunni solestizi, L; Maselli, S; Migliore, E; Amapane, N C; Lopez fernandez, R; Sanchez hernandez, A; Heredia de la cruz, I; Matveev, V; Kracikova, T; Shmatov, S; Vasilev, S; Kurenkov, A; Oleynik, D; Verkheev, A; Voytishin, N; Proskuryakov, A; Bogdanova, G; Petrova, E; Bagaturia, I; Tsamalaidze, Z; Zhao, Z; Arcaro, D J; Barberis, E; Wamorkar, T; Wang, B; Ralph, D K; Velasco, M M; Odell, N J; Sevova, S; Li, W; Merlo, J; Onel, Y; Mermerkaya, H; Moeller, A R; Haytmyradov, M; Dong, R; Bugg, W M; Ragghianti, G C; Delannoy sotomayor, A G; Thapa, K; Yagil, A; Gerosa, R A; Masciovecchio, M; Schmitz, E J; Kapustinsky, J S; Greene, S V; Zhang, L; Vlimant, J V; Mughal, A; Cury siqueira, S; Gershtein, Y; Arora, S R R; Lin, W X; Stickland, D P; Mc donald, K T; Pivarski, J M C; Lucchini, M T; Higginbotham, S L; Rosenfield, M; Long, O R; Johnson, K F; Adams, T; Susa, T; Rykaczewski, H; Ioannou, A; Ge, Y; Levin, A M; Li, J; Li, L; Bloom, K A; Monroy montanez, J A; Kunori, S; Wang, Z; Favart, D; Maltoni, F; Vidal marono, M; Delcourt, M; Markov, S I; Seez, C; Richards, A J; Ferguson, W; Chatziangelou, M; Karathanasis, G; Kontaxakis, P; Jones, J A; Strologas, J; Katsoulis, P; Dutt, S; Roy chowdhury, S; Bhardwaj, R; Purohit, A; Singh, B; Behera, P K; Sharma, A; Spagnolo, P; Tonelli, G E; Giannini, L; Poulios, S; Groote, J F; Untuc, B; Oztirpan, F O; Koseoglu, I; Luiggi lopez, E E; Hadley, N J; Shin, Y H; Safonov, A; Eusebi, R; Rose, A K; Overton, D A; Erbacher, R D; Funk, G N; Pilot, J R; Regnery, B J; Klimenko, S; Matchev, K; Gleyzer, S; Wang, J; Cadamuro, L; Sun, W M; Soffi, L; Lantz, S R; Wright, D; Cline, D; Cousins jr, R D; Erhan, S; Yang, X; Schnaible, C J; Dasgupta, A; Loveless, R; Bradley, D C; Monzat, D; Dodd, L M; Tikalsky, J L; Kapusta, J; Gilbert, W J; Lesko, 
Z J; Marinelli, N; Wayne, M R; Heering, A H; Galanti, M; Duh, Y; Roy, A; Arabgol, M; Hacker, T J; Salva, S; Petrov, V; Barychevski, V; Drobychev, G; Lobko, A; Gabusi, M; Fabris, L; Conte, E R E; Kasprowicz, G H; Kyberd, P; Cole, J E; Lopez, J M; Salazar gonzalez, C A; Benzon, A M; Pelagio, L; Walsh, M F; Postnov, A; Lelas, D; Vaitkus, J V; Jurciukonis, D; Sulmanas, B; Ahmad, A; Ahmed, W; Jalil, S H; Kahl, W E; Taylor, D R; Choi, Y I; Jeong, Y; Roy, T; Schoenenberger, M A; Khateri, P; Etesami, S M; Fiorini, E; Pullia, A; Magni, S; Gennai, S; Fiorendi, S; Zuolo, D; Sanabria arenas, J C; Florez bustos, C A; Holguin coral, A; Mendez, H; Srimanobhas, N; Jaikar, A H; Arteche gonzalez, F J; Call, K R; Vazquez valencia, E F; Calderon monroy, M A; Abdelmaguid, A; Mal, P K; Yuan, L; Lomidze, I; Prangishvili, I; Adamov, G; Dube, S S; Dugad, S; Mohanty, G B; Bhat, M A; Bheesette, S; Malawski, M L; Abou kors, D J

    CMS is a general purpose proton-proton detector designed to run at the highest luminosity at the LHC. It is also well adapted for studies at the initially lower luminosities. The CMS Collaboration consists of over 1800 scientists and engineers from 151 institutes in 31 countries. The main design goals of CMS are: (1) a highly performant muon system; (2) the best possible electromagnetic calorimeter; (3) high quality central tracking; (4) hermetic calorimetry; and (5) a detector costing less than 475 MCHF. All detector sub-systems have started construction. Engineering Design Reviews of parts of these sub-systems have been successfully carried out. These are held prior to granting authorization for purchase. The schedule for the LHC machine and the experiments has been revised and CMS will be ready for first collisions now expected in April 2006. • Magnet: The detector (see Figure) will be built around a long (13 m) and large bore (φ = 5.9 m) high...

  17. Fast emulation of track reconstruction in the CMS simulation

    CERN Document Server

    Komm, Matthias

    2017-01-01

    Simulated samples of various physics processes are a key ingredient of analyses seeking to unlock the physics behind LHC collision data. Samples with ever larger statistics are required to keep up with the increasing amount of recorded data. During sample generation, significant computing time is spent on the reconstruction of charged-particle tracks from energy deposits, a cost that additionally scales with the pileup conditions. In CMS, the FastSimulation package is developed to provide a fast alternative to the standard simulation and reconstruction workflow. It employs various techniques to emulate track reconstruction effects in particle collision events. Several analysis groups in CMS use the package, in particular those requiring many samples to scan the parameter space of physics models (e.g. SUSY) or to estimate systematic uncertainties. The strategies for and recent developments in this emulation are presented, including a novel, flexible implementation of tracking emulation w...
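
    The emulation mentioned above is typically parametric: instead of re-running full pattern recognition on simulated hits, tracks are obtained by applying an efficiency and a resolution smearing to the generated charged particles. The Python sketch below illustrates that general idea with made-up efficiency and resolution parametrisations; it is only an illustration of the technique, not the CMS FastSimulation code.

```python
import math
import random

def emulate_track(pt_gen, eta_gen, rng=random):
    """Parametric track emulation: apply an assumed reconstruction
    efficiency and pT smearing to a generated charged particle.
    Returns None if the track is 'lost', else the smeared (pt, eta)."""
    # Illustrative efficiency: plateau value with a turn-on at low pT,
    # zero outside the tracker acceptance (numbers are placeholders).
    if abs(eta_gen) > 2.5:
        return None
    efficiency = 0.95 * (1.0 - math.exp(-pt_gen / 0.5))
    if rng.random() > efficiency:
        return None
    # Illustrative relative pT resolution: constant term plus a term
    # growing linearly with pT (placeholder numbers as well).
    sigma_rel = math.hypot(0.01, 0.0002 * pt_gen)
    return rng.gauss(pt_gen, sigma_rel * pt_gen), eta_gen

# Toy event: list of generated (pT [GeV], eta) charged particles.
gen_particles = [(1.2, 0.3), (25.0, -1.1), (0.4, 2.0), (60.0, 1.8)]
tracks = [t for t in (emulate_track(pt, eta) for pt, eta in gen_particles) if t]
print(tracks)
```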

  18. Critical services in the LHC computing

    International Nuclear Information System (INIS)

    Sciaba, A

    2010-01-01

    The LHC experiments (ALICE, ATLAS, CMS and LHCb) rely on complex computing systems for data acquisition, processing, distribution, analysis and simulation; these systems run on a variety of services provided by the experiments, the Worldwide LHC Computing Grid and the different computing centres. These services range from the most basic (network, batch systems, file systems) to the mass storage services and the Grid information system, up to the various workload management systems, data catalogues and data transfer tools, often developed internally by the collaborations. In this contribution we review the status of the services most critical to the experiments by quantitatively measuring their readiness with respect to the start of LHC operations. Shortcomings are identified and common recommendations are offered.

  19. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing distributed computing in a device-independent fashion and for load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented; specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated; specifically, static load balancing, task-queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. (5) The implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
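
    As a rough illustration of the manager-worker strategy with task-queue load balancing mentioned in items (2) and (3), the sketch below has a manager hand grid blocks of unequal cost to whichever worker is idle; block_solve and the block list are placeholders, not part of the TEAM solver.

```python
from multiprocessing import Pool

def block_solve(block_id):
    """Placeholder for the per-block flow-solver work unit; it just burns
    a block-dependent amount of CPU to mimic unequal load."""
    total = 0.0
    for i in range(10_000 * (block_id % 7 + 1)):
        total += (i % 13) * 0.5
    return block_id, total

if __name__ == "__main__":
    blocks = list(range(32))  # hypothetical grid blocks of varying cost
    # Task-queue balancing: the pool hands a new block to each worker as
    # soon as it finishes the previous one, in completion order.
    with Pool(processes=4) as pool:
        for block_id, checksum in pool.imap_unordered(block_solve, blocks):
            print(f"block {block_id:2d} done (checksum {checksum:.1f})")
```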

  1. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  2. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform, Enterprise Edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  3. The CMS conductor

    CERN Document Server

    Horváth, I L; Marti, H P; Neuenschwander, J; Smith, R P; Fabbricatore, P; Musenich, R; Calvo, A; Campi, D; Curé, B; Desirelli, Alberto; Favre, G; Riboni, P L; Sgobba, Stefano; Tardy, T; Sequeira-Lopes-Tavares, S

    2000-01-01

    The Compact Muon Solenoid (CMS) is one of the experiments being designed in the framework of the Large Hadron Collider (LHC) project at CERN. The design field of the CMS magnet is 4 T, the magnetic length is 13 m and the aperture is 6 m. This high magnetic field is achieved by means of a 4-layer, 5-module superconducting coil. The coil is wound from an Al-stabilized Rutherford-type conductor. The nominal current of the magnet is 20 kA at 4.5 K. In the CMS coil, unlike in other existing Al-stabilized thin solenoids, the structural function is ensured both by the Al-alloy reinforced conductor and by the external former. In this paper the manufacturing process retained for the 50-km long reinforced conductor is described. In general, the Rutherford-type cable is surrounded by high-purity aluminium in a continuous co-extrusion process to produce the insert. Thereafter the reinforcement is joined by electron-beam welding to the pure Al of the insert, before being machined to the final dimensions. During the...

  4. Sensitivity of the Top quark mass measurement with the CMS experiment at LHC using t-tbar multijet simulated events

    CERN Document Server

    Codispoti, Giuseppe

    This thesis follows a substantial contribution to the realization of the CMS computing system, which can be seen as a relevant part of the experiment itself. A physics analysis completes the road from Monte Carlo production and the development of analysis tools to the final physics study, which is the actual goal of the experiment. The physics topic of this thesis is the study of the fully hadronic decay of t-tbar events in the CMS experiment. A multi-jet trigger has been devised to fix a reasonable starting point, reducing the multi-jet sample to the nominal trigger rate. An offline selection has been developed to improve the S/B ratio. The b-tag is applied to provide a further S/B improvement. The selection is applied to the background sample and to the samples generated at different top quark masses. The top quark mass candidate is reconstructed for all those samples using a kinematic fitter. The resulting distributions are used to build p.d.f.’s, interpolating them with a continuous arbitrary curve. These curves are...
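
    The template procedure sketched above (mass-estimator distributions built at several generated top-quark masses, turned into p.d.f.s and interpolated) can be illustrated with a toy likelihood scan. The sketch below uses invented Gaussian templates and numbers; it is a generic illustration of the method, not the thesis code.

```python
import numpy as np

# Toy templates: mean of the reconstructed-mass estimator at a few
# hypothetical generated top-mass points (GeV); values are illustrative.
gen_masses = np.array([165.0, 170.0, 175.0, 180.0])
template_means = np.array([163.0, 168.2, 173.4, 178.6])
template_width = 12.0  # assumed common resolution of the estimator

def nll(data, m_top):
    """Negative log-likelihood of 'data' under the interpolated template."""
    mu = np.interp(m_top, gen_masses, template_means)  # continuous interpolation
    return 0.5 * np.sum(((data - mu) / template_width) ** 2)

# Pseudo-data generated at a 'true' mass of 172.5 GeV.
rng = np.random.default_rng(1)
data = rng.normal(np.interp(172.5, gen_masses, template_means), template_width, size=500)

scan = np.linspace(165, 180, 151)
best = scan[np.argmin([nll(data, m) for m in scan])]
print(f"fitted top mass ~ {best:.1f} GeV")
```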

  5. Decomposing energy balance contributions for quenched jets with CMS

    Energy Technology Data Exchange (ETDEWEB)

    Evdokimov, Olga

    2016-12-15

    Modification of energy balance in dijet events from heavy ion collisions, measured by CMS, was among the first jet quenching observations in the LHC energy domain. Here we further study the spatial extent of medium-induced modifications for such dijets, as well as potential medium response to propagating partons, using two-dimensional angular correlations of charged hadrons measured with respect to jets. New differential measurements of charged particle energy flow about the jet direction as a function of relative azimuth and relative pseudorapidity from 2.76 TeV PbPb collisions are compared with the reference pp data recorded by the CMS at the same energy. Modifications of correlated charged hadron distributions for both the leading and the subleading sides of the dijet are reported, together with comparisons of the long-range asymmetry of the underlying event in PbPb vs pp collisions.
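
    The two-dimensional correlations described here are, in essence, pT-weighted histograms of charged hadrons in relative azimuth and relative pseudorapidity with respect to the jet axis. A minimal numpy sketch with invented inputs (not the CMS analysis code):

```python
import numpy as np

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into (-pi, pi]."""
    d = phi1 - phi2
    return (d + np.pi) % (2 * np.pi) - np.pi

# Hypothetical jet axis and charged-hadron kinematics (eta, phi, pT).
jet_eta, jet_phi = 0.4, 1.2
had_eta = np.random.uniform(-2.4, 2.4, size=1000)
had_phi = np.random.uniform(-np.pi, np.pi, size=1000)
had_pt = np.random.exponential(1.5, size=1000)

deta = had_eta - jet_eta
dphi = delta_phi(had_phi, jet_phi)

# pT-weighted 2D correlation: an "energy flow" map around the jet direction.
flow, eta_edges, phi_edges = np.histogram2d(
    deta, dphi, bins=(24, 24), range=((-3, 3), (-np.pi, np.pi)), weights=had_pt
)
print(flow.shape, flow.sum())
```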

  6. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    Energy Technology Data Exchange (ETDEWEB)

    Balcas, J. [Caltech; Bockelman, B. [Nebraska U.; Hufnagel, D. [Fermilab; Hurtado Anampa, K. [Notre Dame U.; Aftab Khan, F. [NCP, Islamabad; Larson, K. [Fermilab; Letts, J. [UC, San Diego; Marra da Silva, J. [Sao Paulo, IFT; Mascheroni, M. [Fermilab; Mason, D. [Fermilab; Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Tiradani, A. [Fermilab

    2017-11-22

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.
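
    In a pilot-based pool like the one described here, the provisioning layer submits pilot jobs to the sites; once a pilot starts it joins the central pool and pulls actual payloads (late binding). The toy sketch below mimics that pattern with a shared queue and a few pilot threads; it is a conceptual illustration only, not glideinWMS or HTCondor code.

```python
import queue
import threading
import time

payloads = queue.Queue()
for job_id in range(12):          # payloads waiting in the central pool
    payloads.put(job_id)

def pilot(pilot_id, cores=4):
    """A started pilot advertises 'cores' slots and pulls payloads until
    the queue is drained (late binding of work to resources)."""
    while True:
        try:
            job_id = payloads.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01)          # stand-in for running the payload
        print(f"pilot {pilot_id} ({cores} cores) ran payload {job_id}")

pilots = [threading.Thread(target=pilot, args=(i,)) for i in range(3)]
for p in pilots:
    p.start()
for p in pilots:
    p.join()
```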

  7. The CMS Higgs Boson Goose Game

    CERN Document Server

    Cavallo, Francesca Romana

    2015-01-01

    Building and operating the CMS Detector is a complicated endeavour! Now, more than 20 years after the detector was conceived, the CMS Bologna group proposes to follow the steps of this challenging project by playing The Higgs Boson Goose Game, illustrating CMS activities and goals. The concept of the game is inspired by the traditional Game of the Goose. The underlying idea is that the progress of building and operating a detector at the LHC is similar to the progress of the pawns on the game board: it is fast at times, bringing rewards and satisfaction, while sometimes unexpected problems cause delays or even a step back, requiring CMS scientists to use all of their skill and creativity to devise new solutions.

  8. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensuring stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code base and a significant number of developers, and it can serve as a model for future projects.
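
    Comparing a reconstructed quantity between a candidate release and a reference, as the validation described above does, boils down to a statistical comparison of two distributions. A hedged, generic sketch (not the CMS validation tooling) using a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical per-event quantity (e.g. a reconstructed mass) from the
# reference release and from the release under validation.
reference = rng.normal(91.2, 2.5, size=20_000)
candidate = rng.normal(91.2, 2.5, size=20_000)

stat, p_value = ks_2samp(reference, candidate)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
if p_value < 0.01:
    print("flag for expert review: distributions differ significantly")
else:
    print("compatible with the reference release")
```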

  9. 42 CFR 438.724 - Notice to CMS.

    Science.gov (United States)

    2010-10-01

    42 CFR 438.724 (Public Health; Medical Assistance Programs; Managed Care; Sanctions), § 438.724 Notice to CMS: (a) The State must give the CMS Regional Office written notice whenever it imposes or lifts a sanction for one of the violations...

  10. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.

    2010-01-01

    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing

  11. Scaling CMS data transfer system for LHC start-up

    International Nuclear Information System (INIS)

    Tuura, L; Bockelman, B; Bonacorsi, D; Egeland, R; Feichtinger, D; Metson, S; Rehn, J

    2008-01-01

    The CMS experiment will need to sustain uninterrupted high reliability, high throughput and very diverse data transfer activities as the LHC operations start. PhEDEx, the CMS data transfer system, will be responsible for the full range of the transfer needs of the experiment. Covering the entire spectrum is a demanding task: from the critical high-throughput transfers between CERN and the Tier-1 centres, to high-scale production transfers among the Tier-1 and Tier-2 centres, to managing the 24/7 transfers among all the 170 institutions in CMS, and to providing straightforward access to a handful of files for individual physicists. In order to produce a system with confirmed capability to meet these objectives, the PhEDEx data transfer system has undergone rigorous development and numerous demanding scale tests. We have sustained production transfers exceeding 1 PB/month for several months and have demonstrated core system capacity several orders of magnitude above expected LHC levels. We describe the level of scalability reached, and how we got there, with focus on the main insights into developing a robust, lock-free and scalable distributed database application, the validation stress-test methods we have used, and the development and testing tools we found practically useful.
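
    For scale, the sustained 1 PB/month quoted above corresponds, assuming a 30-day month and decimal petabytes, to an average throughput of roughly

```latex
\frac{10^{15}\,\mathrm{B}}{30 \times 24 \times 3600\,\mathrm{s}}
  \approx 3.9 \times 10^{8}\,\mathrm{B/s}
  \approx 390\,\mathrm{MB/s}
  \approx 3\,\mathrm{Gb/s}.
```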

  12. Forward physics with CMS

    CERN Document Server

    Grothe, Monika

    2008-01-01

    Forward physics with CMS at the LHC covers a wide range of physics subjects, including very low-x_Bj QCD, underlying event and multiple interactions characteristics, gamma-mediated processes, shower development at the energy scale of primary cosmic ray interactions with the atmosphere, diffraction in the presence of a hard scale and even MSSM Higgs discovery in central exclusive production. Selected feasibility studies to illustrate the forward physics potential of CMS are presented.

  13. CMS: Present status, limitations, and upgrade plans

    International Nuclear Information System (INIS)

    Cheung, H.W.K.

    2011-01-01

    An overview of the CMS upgrade plans will be presented. A brief status of the CMS detector will be given, covering some of the issues we have so far experienced. This will be followed by an overview of the various CMS upgrades planned, covering the main motivations for them, and the various R and D efforts for the possibilities under study. The CMS detector has been working extremely well since the start of data-taking at the LHC as is evidenced by the numerous excellent results published by CMS and presented at this workshop and recent conferences. Less well documented are the various issues that have been encountered with the detector. In the spirit of this workshop I will cover some of these issues with particular emphasis on problems that motivate some of the upgrades to the CMS detector for this decade of data-taking. Though the CMS detector has been working extremely well and expectations are great for making the most of the LHC luminosity, there have been a number of issues encountered so far. Some of these have been described and while none currently presents a problem for physics performance, some of them are expected to become more problematic, especially at the highest Phase 1 luminosities for which the majority of the integrated luminosity will be collected. These motivate upgrades for various parts of the CMS detector so that the current excellent physics performance can be maintained or even surpassed in the realm of the highest Phase 1 luminosities.

  14. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods of calculating the point-source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low-energy collimators, while the second method, which computes both geometric and penetration efficiencies, can be used for medium- and high-energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained and the data are used for computing the distribution of the whole collimator. For high-energy collimators the entire detector region is imagined to be divided into elemental areas. The efficiency of an elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three-dimensional geometric techniques. The method of computing the line-source efficiency distribution from the point-source distribution is also explained. The formulations have been tested by computing the efficiency distributions of several commercial collimators and of collimators fabricated by us. (Auth.)
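
    The last step mentioned, obtaining the line-source efficiency distribution from the point-source distribution, amounts to integrating the point-source response along the direction of the line. A small numpy sketch with an assumed Gaussian point-source map (purely illustrative, not the authors' formulation):

```python
import numpy as np

# Assumed point-source efficiency map on a plane through the focus,
# sampled on a regular (x, y) grid; a Gaussian stands in for the real
# geometric-plus-penetration response computed in the paper.
dx = dy = 0.1  # cm
x = np.arange(-5, 5 + dx, dx)
y = np.arange(-5, 5 + dy, dy)
X, Y = np.meshgrid(x, y, indexing="ij")
point_eff = np.exp(-(X**2 + Y**2) / (2 * 0.8**2))

# Line source along y: the response as a function of the perpendicular
# coordinate x is the integral of the point response over y.
line_eff = point_eff.sum(axis=1) * dy

print("peak point-source efficiency:", point_eff.max())
print("peak line-source response   :", line_eff.max())
```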

  15. Trigger Algorithms for Alignment and Calibration at the CMS Experiment

    CERN Document Server

    Fernandez Perez Tomei, Thiago Rafael

    2017-01-01

    The data needs of the Alignment and Calibration group at the CMS experiment are reasonably different from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analyse the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.

  16. A Weibull distribution accrual failure detector for cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
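
    An accrual failure detector outputs a continuous suspicion level rather than a binary alive/dead verdict; with a Weibull model of heartbeat inter-arrival times, a common choice is phi(t) = -log10(1 - F(t)), where F is the fitted Weibull CDF and t is the time since the last heartbeat. The sketch below illustrates that idea generically (fitting with scipy); it is not the detector implementation proposed in the paper.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical recent heartbeat inter-arrival times (seconds).
samples = np.array([0.98, 1.05, 1.12, 0.95, 1.30, 1.01, 1.08, 1.21, 0.99, 1.15])

# Fit a two-parameter Weibull (location fixed at 0) to the sample window.
shape, _, scale = weibull_min.fit(samples, floc=0)

def suspicion(t_since_last):
    """Accrual suspicion level phi = -log10(P(next heartbeat arrives later))."""
    p_later = weibull_min.sf(t_since_last, shape, loc=0, scale=scale)
    return -np.log10(max(p_later, 1e-300))  # guard against underflow

for t in (0.5, 1.5, 3.0, 6.0):
    print(f"t = {t:4.1f} s  ->  phi = {suspicion(t):6.2f}")
```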

  17. Award for the best CMS thesis

    CERN Multimedia

    2003-01-01

    The 2002 CMS PhD Thesis Award has been presented to Giacomo Luca Bruno for his thesis, defended at the University of Pavia in Italy and entitled "The RPC detectors and the muon system for the CMS experiment at the LHC". His work was supervised by Sergio P. Ratti from the University of Pavia. Since April 2002, Giacomo has been employed as a research fellow by CERN's EP Division. He continues to work on CMS in the areas of data acquisition and physics reconstruction and selection. Last Monday he received a commemorative engraved plaque from Lorenzo Foà, chairman of the CMS Collaboration Board. He will also receive an expenses-paid trip to an international physics conference to present his thesis results. (Photo: Giacomo Luca Bruno with Lorenzo Foà)

  18. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
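
    The rates quoted above imply an overall online rejection factor of about 1000 and, spread over a farm of order 1000 CPUs, an average processing budget of order ten milliseconds per Level-1-accepted event:

```latex
\text{rejection} = \frac{100\ \mathrm{kHz}}{100\ \mathrm{Hz}} = 10^{3},
\qquad
\frac{100\ \mathrm{kHz}}{\mathcal{O}(1000)\ \mathrm{CPUs}} \approx 100\ \mathrm{Hz\ per\ CPU}
\;\Rightarrow\;
\langle t_{\mathrm{proc}} \rangle \lesssim 10\ \mathrm{ms\ per\ event}.
```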

  19. The CMS Data Analysis School Experience

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [INFN, Bari; Bauerdick, L. [Fermilab; Chen, J. [Taiwan, Natl. Taiwan U.; Gallo, E. [DESY; Klima, B. [Fermilab; Malik, S. [Puerto Rico U., Mayaguez; Mulders, M. [CERN; Palla, F. [INFN, Pisa; Rolandi, G. [Pisa, Scuola Normale Superiore

    2017-11-21

    The CMS Data Analysis School is an official event organized by the CMS Collaboration to teach students and post-docs how to perform a physics analysis. The school is coordinated by the CMS schools committee and was first implemented at the LHC Physics Center at Fermilab in 2010. As part of the training, there are a number of “short” exercises on physics object reconstruction and identification, Monte Carlo simulation, and statistical analysis, which are followed by “long” exercises based on physics analyses. Some of the long exercises go beyond the current state of the art of the corresponding CMS analyses. This paper describes the goals of the school, the preparations for a school, the structure of the training, and student satisfaction with the experience as measured by surveys.

  20. CMS-G from Beta vulgaris ssp. maritima is maintained in natural populations despite containing an atypical cytochrome c oxidase.

    Science.gov (United States)

    Meyer, Etienne H; Lehmann, Caroline; Boivin, Stéphane; Brings, Lea; De Cauwer, Isabelle; Bock, Ralph; Kühn, Kristina; Touzet, Pascal

    2018-02-23

    While mitochondrial mutants of the respiratory machinery are rare and often lethal, cytoplasmic male sterility (CMS), a mitochondrially inherited trait that results in pollen abortion, is frequently encountered in wild populations. It generates a breeding system called gynodioecy. In Beta vulgaris ssp. maritima, a gynodioecious species, we found CMS-G to be widespread across the distribution range of the species. Despite the sequencing of the mitochondrial genome of CMS-G, the mitochondrial sterilizing factor causing CMS-G is still unknown. By biochemically characterizing CMS-G, we found that the expression of several mitochondrial proteins is altered in CMS-G plants. In particular, Cox1, a core subunit of the cytochrome c oxidase (complex IV), is larger but can still assemble into complex IV. However, the CMS-G-specific complex IV was only detected as a stabilized dimer. We did not observe any alteration of the affinity of complex IV for cytochrome c; however, in CMS-G, complex IV capacity is reduced. Our results show that CMS-G is maintained in many natural populations despite being associated with an atypical complex IV. We suggest that the modified complex IV could incur the associated cost predicted by theoretical models to maintain gynodioecy in wild populations. © 2018 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.

  1. Studies of the electromagnetic calorimeter and direct photon production at the CMS detector

    International Nuclear Information System (INIS)

    Reid, E.C.

    1999-04-01

    This thesis describes work carried out on the Compact Muon Solenoid (CMS) experiment for the Large Hadron Collider at CERN. Studies of the prototype of the electromagnetic calorimeter are described. The energy resolution has been evaluated for the 7 x 7 lead tungstate crystal matrix. The energy resolution achieved was: σ/E = 4.5%/√E ± 0.31%. An investigation of the response of eight prototype crystals to irradiation is also presented. This thesis describes in detail the first study of direct photon production at CMS. Event simulation and methods of reducing the background to the direct photon signal are presented. This work demonstrates that this process may be used to distinguish between different parameterisations of the gluon distribution. The sensitivity is such that a few days' worth of data taking at low luminosity will be sufficient for this type of analysis at CMS. (author)
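
    Reading the quoted resolution with E in GeV and combining the stochastic and constant terms in quadrature (the usual convention; the ± in the record presumably stands for that combination), one finds, for example at E = 100 GeV,

```latex
\frac{\sigma}{E} = \frac{4.5\%}{\sqrt{100}} \oplus 0.31\%
                 = 0.45\% \oplus 0.31\%
                 = \sqrt{0.45^2 + 0.31^2}\,\% \approx 0.55\%.
```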

  2. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that today ranges from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  3. 7th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Jung, Jason; Badica, Costin

    2014-01-01

    This book represents the combined peer-reviewed proceedings of the Seventh International Symposium on Intelligent Distributed Computing - IDC-2013, of the Second Workshop on Agents for Clouds - A4C-2013, of the Fifth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2013, and of the International Workshop on Intelligent Robots - iR-2013. All the events were held in Prague, Czech Republic during September 4-6, 2013. The 41 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, bio-informatics, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, intelligent robotics, knowledge management, linked data, mobile agents, ontologies, pervasive computing, self-organizing systems, peer-to-peer computing, social networks and trust, and swarm intelligence.

  4. QCD physics with the CMS experiment

    CERN Document Server

    Cerci, Salim

    2017-01-01

    Jets, which are the signatures of quarks and gluons in the detector, can be described by Quantum Chromodynamics (QCD) in terms of parton-parton scattering. Jets are abundantly produced at the LHC's high energy scales. Measurements of inclusive jets, dijets and multijets can be used to test perturbative QCD predictions and to constrain parton distribution functions (PDF), as well as to measure the strong coupling constant $\alpha_{S}$. The measurements use samples of proton-proton collisions collected with the CMS detector at the LHC at center-of-mass energies of 7, 8 and 13 TeV.

  5. QCD Physics with the CMS Experiment

    Science.gov (United States)

    Cerci, S.

    2017-12-01

    Jets, which are the signatures of quarks and gluons in the detector, can be described by Quantum Chromodynamics (QCD) in terms of parton-parton scattering. Jets are abundantly produced at the LHC's high energy scales. Measurements of inclusive jets, dijets and multijets can be used to test perturbative QCD predictions and to constrain parton distribution functions (PDF), as well as to measure the strong coupling constant αS. The measurements use samples of proton-proton collisions collected with the CMS detector at the LHC at center-of-mass energies of 7, 8 and 13 TeV.

  6. ATLAS distributed computing: experience and evolution

    International Nuclear Information System (INIS)

    Nairz, A

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb⁻¹ of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

  7. CMS launches new educational tools

    CERN Document Server

    Corinne Pralavorio

    2014-01-01

    On 5 and 11 November, almost 90 pupils from the Fermi scientific high school in Livorno, Italy, took part in two Masterclass sessions organised by CMS.   CMS Masterclass participants.  The pupils took over a hall at CERN for an afternoon to test a new software tool called CIMA (CMS Instrument for Masterclass Analysis) for the first time. The software simplifies the process of recording results and reduces the number of steps required to enter data. During the exercise, each group of pupils had to analyse about a hundred events from the LHC. For each event, the budding physicists determined whether what they saw was a candidate W boson, Z boson or Higgs boson, identified the decay mode and entered key data. At the end of the analysis, they used the results to reconstruct a mass diagram. CIMA was developed by a team of scientists from the University of Aachen, Germany, the University of Notre-Dame, United States, and CERN. CMS has also added yet another educational tool to its already l...

  8. Development of Web Tools for the automatic Upload of Calibration Data into the CMS Condition Data

    Science.gov (United States)

    di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2010-04-01

    This article explains the recent evolution of the Condition Database Application Service. The Condition Database Application Service is part of the condition database system of the CMS experiment, and it is used for handling and monitoring the CMS detector condition data and the corresponding computing resources, such as Oracle databases, storage services and network devices. We deployed a service, the offline Dropbox service, that will be used by the Alignment and Calibration Group to upload, from the offline network (GPN), the calibration constants produced by running offline analyses.
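
    The sketch below is a purely hypothetical illustration of a dropbox-style upload workflow of the kind described above: payload files placed in a watched directory are shipped together with their metadata to a condition-upload endpoint. The URL, file layout and metadata fields are invented for the example and are not the CMS service's API.

```python
# Hypothetical sketch of a "dropbox"-style uploader; it is NOT the CMS
# offline Dropbox client. It only illustrates the workflow the abstract
# describes: pick up calibration payload files produced offline and ship
# them, with their metadata, to a condition-upload endpoint.
import json
import pathlib
import urllib.request

UPLOAD_URL = "https://conditions.example.org/upload"  # hypothetical endpoint

def upload_payload(payload_path: pathlib.Path, metadata: dict) -> int:
    """POST one calibration payload plus its metadata; return HTTP status."""
    body = json.dumps({
        "filename": payload_path.name,
        "metadata": metadata,
        "content_hex": payload_path.read_bytes().hex(),  # toy encoding
    }).encode()
    request = urllib.request.Request(
        UPLOAD_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    dropbox_dir = pathlib.Path("dropbox")  # directory watched for new payloads
    if dropbox_dir.is_dir():
        for payload in sorted(dropbox_dir.glob("*.db")):
            meta = json.loads(payload.with_suffix(".json").read_text())
            print(payload.name, "->", upload_payload(payload, meta))
```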

  9. Distributed computing by oblivious mobile robots

    CERN Document Server

    Flocchini, Paola; Santoro, Nicola

    2012-01-01

    The study of what can be computed by a team of autonomous mobile robots, originally started in robotics and AI, has become increasingly popular in theoretical computer science (especially in distributed computing), where it is now an integral part of the investigations on computability by mobile entities. The robots are identical computational entities located and able to move in a spatial universe; they operate without explicit communication and are usually unable to remember the past; they are extremely simple, with limited resources, and individually quite weak. However, collectively the ro
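
    The model referred to above is conventionally formalised as a Look-Compute-Move cycle; the sketch below illustrates that cycle with an arbitrary rule (move toward the centroid of the observed positions), chosen only for illustration and not taken from the book.

```python
# Minimal sketch of the Look-Compute-Move cycle underlying the oblivious-robot
# model: each robot observes a snapshot of positions (no memory of the past),
# deterministically computes a target from that snapshot alone, and moves.
from typing import List, Tuple

Point = Tuple[float, float]

def look(robots: List[Point]) -> List[Point]:
    """Each robot only sees the current multiset of positions."""
    return list(robots)

def compute(snapshot: List[Point]) -> Point:
    """Deterministic rule on the snapshot alone: head for the centroid."""
    n = len(snapshot)
    return (sum(p[0] for p in snapshot) / n, sum(p[1] for p in snapshot) / n)

def move(position: Point, target: Point, step: float = 0.5) -> Point:
    """Move a fraction of the way toward the computed target."""
    return (position[0] + step * (target[0] - position[0]),
            position[1] + step * (target[1] - position[1]))

if __name__ == "__main__":
    robots: List[Point] = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
    for _ in range(5):  # synchronous rounds, for simplicity
        snapshot = look(robots)
        robots = [move(p, compute(snapshot)) for p in robots]
    print(robots)
```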

  10. FF-EMU: a radiation tolerant ASIC for the distribution of timing, trigger and control signals in the CMS End-Cap Muon detector

    International Nuclear Information System (INIS)

    Campagnari, C; Costantino, N; Magazzù, G; Tongiani, Claudio

    2012-01-01

    A radiation tolerant integrated circuit for the distribution of clock, trigger and control signals in the front-end electronics of the CMS End-Cap Muon detector has been developed in IBM 130 nm CMOS technology. The circuit houses transmitter and receiver interfaces to serial links implementing the FF-LYNX protocol, which allows the integrated transmission of triggers and data frames with different latency constraints. Encoder and decoder modules associate signal transitions with FF-LYNX frames. The system and ASIC architecture and behavior, together with the results of testing and characterization of the ASIC prototypes, will be presented.

  11. The CMS Electronic Logbook

    CERN Multimedia

    Bukowiec, S; Beccati, B; Behrens, U; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa Perez, J A; Deldicque, C; Erhan, S; Gigi, D; Glege, F; Gomez-Reino, R; Hatton, D; Hwong, Y L; Loizides, C; Ma, F; Masetti, L; Meijers, F; Meschi, E; Meyer, A; Mommsen, R K; Moser, R; O’Dell, V; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Racz, A; Raginel, O; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, M; Sumorok, K; Sungho Yoon, A

    2010-01-01

    The CMS ELogbook (ELog) is a collaborative tool, which provides a platform to share and store information about various events or problems occurring in the Compact Muon Solenoid (CMS) experiment at CERN during operation. The ELog is based on a Model–View–Controller (MVC) software architectural pattern and uses an Oracle database to store messages and attachments. The ELog is developed as a pluggable web component in Oracle Portal in order to provide better management, monitoring and security.
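
    As a generic illustration of the Model-View-Controller separation mentioned above (not CMS code), the sketch below splits a toy logbook into a message model with an in-memory store standing in for the Oracle database, a view that renders messages, and a controller that mediates posting and display.

```python
# Generic MVC illustration for a toy logbook (not the CMS ELog implementation).
import dataclasses
import datetime
from typing import List

@dataclasses.dataclass
class LogMessage:                      # Model: what gets stored
    author: str
    subsystem: str
    text: str
    timestamp: datetime.datetime = dataclasses.field(
        default_factory=datetime.datetime.utcnow)

class MessageStore:                    # Model: persistence layer stand-in
    def __init__(self) -> None:
        self._messages: List[LogMessage] = []

    def add(self, message: LogMessage) -> None:
        self._messages.append(message)

    def all(self) -> List[LogMessage]:
        return list(self._messages)

def render(messages: List[LogMessage]) -> str:   # View: presentation only
    return "\n".join(
        f"[{m.timestamp:%Y-%m-%d %H:%M}] {m.subsystem}/{m.author}: {m.text}"
        for m in messages)

class LogbookController:               # Controller: mediates user actions
    def __init__(self, store: MessageStore) -> None:
        self.store = store

    def post(self, author: str, subsystem: str, text: str) -> None:
        self.store.add(LogMessage(author, subsystem, text))

    def page(self) -> str:
        return render(self.store.all())

if __name__ == "__main__":
    controller = LogbookController(MessageStore())
    controller.post("shifter", "Tracker", "HV trip on sector +2, recovered.")
    print(controller.page())
```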

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office (Figure 2: number of events per month for 2012): Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  13. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread availability of software for design and pre-production in mechanical engineering mean that, at present, large industrial enterprises and small engineering companies alike implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key research models, but the system-wide problems of efficient distribution (balancing) of the computational load and of the placement of input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is an important task.
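
    A minimal sketch of the balancing step described above: monitor per-node load and route each incoming request to the node selected by a predetermined rule. The least-loaded scoring used here is an illustrative policy, not the authors' algorithm.

```python
# Minimal load-balancing sketch: pick the node with the smallest combined
# load score and book the request against it (illustrative policy only).
import dataclasses
from typing import List

@dataclasses.dataclass
class Node:
    name: str
    cpu_load: float      # e.g. 1-minute load average divided by core count
    active_jobs: int

def pick_node(nodes: List[Node]) -> Node:
    """Select the node with the smallest combined load score."""
    return min(nodes, key=lambda n: n.active_jobs + 10.0 * n.cpu_load)

def dispatch(request_id: str, nodes: List[Node]) -> str:
    node = pick_node(nodes)
    node.active_jobs += 1          # bookkeeping until the job finishes
    return f"request {request_id} -> {node.name}"

if __name__ == "__main__":
    cluster = [Node("wn01", 0.20, 3), Node("wn02", 0.75, 1), Node("wn03", 0.10, 5)]
    for rid in ("r1", "r2", "r3"):
        print(dispatch(rid, cluster))
```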

  14. Parton distributions for the LHC Run II

    CERN Document Server

    Ball, Richard D.; Carrazza, Stefano; Deans, Christopher S.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P.; Latorre, José I.; Rojo, Juan; Ubiali, Maria

    2015-01-01

    We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different pertu...
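
    The closure-test idea described above can be illustrated with a toy example (unrelated to the NNPDF code): generate pseudo-data from a known underlying law with assumed uncertainties, fit it many times, and check that the fitted parameter scatters around the truth with the statistically expected spread and negligible bias.

```python
# Toy closure test: pseudo-data from a known straight line, refitted many
# times, with the bias and spread of the fitted slope checked at the end.
import random
import statistics

random.seed(1)
TRUE_SLOPE, TRUE_OFFSET, SIGMA = 2.0, 0.5, 0.3
XS = [0.1 * i for i in range(1, 21)]

def fit_line(xs, ys):
    """Least-squares straight-line fit; returns (slope, offset)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

pulls = []
for _ in range(500):                                  # 500 pseudo-experiments
    ys = [TRUE_SLOPE * x + TRUE_OFFSET + random.gauss(0.0, SIGMA) for x in XS]
    slope, _ = fit_line(XS, ys)
    pulls.append(slope - TRUE_SLOPE)

# Closure: the fitted slope should scatter around the truth with the
# statistically expected spread and negligible bias.
print(f"bias   = {statistics.fmean(pulls):+.4f}")
print(f"spread = {statistics.stdev(pulls):.4f}")
```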

  15. Data Quality Monitoring of the CMS Tracker

    International Nuclear Information System (INIS)

    Dutta, Suchandra

    2011-01-01

    The Data Quality Monitoring system for the Tracker has been developed within the CMS Software framework. It has been designed to be used during online data taking as well as during offline reconstruction. The main goal of the online system is to monitor detector performance and identify problems very efficiently during data collection so that proper actions can be taken to fix them. On the other hand, any issue with data reconstruction or calibration can be detected during offline processing using the same tool. The monitoring is performed using histograms which are filled with information from raw and reconstructed data computed at the level of individual detectors. Furthermore, statistical tests are performed on these histograms to check the quality, and flags are generated automatically. Results are visualized with web-based graphical user interfaces. Final data certification is done by combining these automatic flags with manual inspection. The Tracker DQM system has been successfully used during cosmic data taking and has been optimised to fulfil the conditions of collision data taking. In this paper we describe the functionality of the CMS Tracker DQM system and the experience acquired during proton-proton collisions.
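
    The sketch below illustrates the kind of automatic statistical test and flagging described above; the chi-square-like comparison against a reference histogram is an illustrative stand-in, not the CMS Tracker DQM code.

```python
# Illustrative quality test: compare a monitored histogram to a reference
# with a per-bin normalised residual statistic and emit an automatic flag.
from typing import List

def quality_flag(monitored: List[float], reference: List[float],
                 threshold: float = 2.0) -> str:
    """Return GOOD/BAD based on a chi-square-per-bin test."""
    chi2 = 0.0
    for obs, ref in zip(monitored, reference):
        sigma = max(ref, 1.0) ** 0.5          # Poisson-like uncertainty
        chi2 += ((obs - ref) / sigma) ** 2
    chi2_per_bin = chi2 / len(reference)
    return "GOOD" if chi2_per_bin < threshold else "BAD"

if __name__ == "__main__":
    reference = [100.0, 250.0, 400.0, 250.0, 100.0]   # expected shape (toy)
    good_run  = [ 95.0, 260.0, 390.0, 245.0, 110.0]
    bad_run   = [ 95.0, 260.0,  40.0, 245.0, 110.0]   # dead region in one bin
    print("run A:", quality_flag(good_run, reference))
    print("run B:", quality_flag(bad_run, reference))
```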

  16. HammerCloud: A Stress Testing System for Distributed Analysis

    International Nuclear Information System (INIS)

    Ster, Daniel C van der; García, Mario Úbeda; Paladin, Massimo; Elmsheuser, Johannes

    2011-01-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress-testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain, at a steady state, a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results, both to evaluate progress and to compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
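
    Point (b) above, keeping a predefined number of jobs running at each site, can be sketched as a simple top-up loop; the Site class and its submit/poll methods below are illustrative stand-ins, not the Ganga or HammerCloud API.

```python
# Sketch of a steady-state stress test: after each poll, submit enough new
# jobs to bring the number running back up to the target.
import random

random.seed(0)
TARGET_RUNNING = 50          # jobs to keep in flight per site

class Site:
    def __init__(self, name: str) -> None:
        self.name = name
        self.running = 0
        self.completed = 0

    def poll(self) -> None:
        """Pretend some fraction of running jobs finished since last poll."""
        finished = sum(1 for _ in range(self.running) if random.random() < 0.2)
        self.running -= finished
        self.completed += finished

    def submit(self, n: int) -> None:
        self.running += n

def top_up(site: Site) -> None:
    site.poll()
    deficit = TARGET_RUNNING - site.running
    if deficit > 0:
        site.submit(deficit)

if __name__ == "__main__":
    site = Site("T2_XX_Example")
    site.submit(TARGET_RUNNING)
    for cycle in range(5):
        top_up(site)
        print(f"cycle {cycle}: running={site.running} completed={site.completed}")
```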

  17. Proceedings of the second workshop of LHC Computing Grid, LCG-France; ACTES, 2e colloque LCG-France

    Energy Technology Data Exchange (ETDEWEB)

    Chollet, Frederique; Hernandez, Fabio; Malek, Fairouz; Gaelle, Shifrin (eds.) [Laboratoire de Physique Corpusculaire Clermont-Ferrand, Campus des Cezeaux, 24, avenue des Landais, Clermont-Ferrand (France)

    2007-03-15

    The second LCG-France Workshop was held in Clermont-Ferrand on 14-15 March 2007. These sessions, organized by IN2P3 and DAPNIA, were attended by around 70 participants working on the LHC Computing Grid in France. The workshop was an opportunity for exchanges of information between French and foreign site representatives on one side and delegates of the experiments on the other. The event shed light on the place of LHC computing within the worldwide W-LCG project, the ongoing actions, and the prospects for 2007 and beyond. The following communications were presented: 1. The current status of LHC computing in France; 2. The LHC Grid infrastructure in France and associated resources; 3. Commissioning of Tier 1; 4. The Tier-2 and Tier-3 sites; 5. Computing in the ALICE experiment; 6. Computing in the ATLAS experiment; 7. Computing in the CMS experiment; 8. Computing in the LHCb experiment; 9. Management and operation of computing grids; 10. 'The VOs talk to sites'; 11. Peculiarities of ATLAS; 12. Peculiarities of CMS and ALICE; 13. Peculiarities of LHCb; 14. 'The sites talk to VOs'; 15. Worldwide operation of the Grid; 16. Following up the Grid jobs; 17. Surveillance and managing failures; 18. Job scheduling and tuning; 19. Managing the site infrastructure; 20. LCG-France communications; 21. Managing the Grid data; 22. Overview of the network infrastructure and site storage; 23. ALICE bulk transfers; 24. ATLAS bulk transfers; 25. CMS bulk transfers; 26. LHCb bulk transfers; 27. Access to LHCb data; 28. Access to CMS data; 29. Access to ATLAS data; 30. Access to ALICE data; 31. Data analysis centers; 32. D0 Analysis Farm; 33. Some CMS grid analyses; 34. PROOF; 35. Distributed analysis using GANGA; 36. T2 set-up for end-users. In their concluding remarks, Fairouz Malek and Dominique Pallin stressed that the current workshop was closer to users, while the task of tightening the links between the sites and the experiments was definitely achieved. The IN2P3

  18. VIP visit to CERN P5 CMS of Pakistan Science Members

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    VIP visit to CERN P5 (CMS) by PAEC & JCPC science members. List of PAEC visitors: Dr. Badar Suleman - Member Science PAEC & Member of JCPC; Dr. Waqar M. Butt - Member Engineering (Head of HMC3); Dr. Maqsood Ahmad - Chief Scientist (Head of Accelerator Project). List of CMS participants: Prof. Joseph Incandela, CMS Spokesperson; Dr. Austin Ball, CMS Technical Coordinator; Mr Andrzej Charkiewicz, CMS Resources Manager; Dr. Michael Hoch, CMS outreach activities, CMS photographer and guide; Dr. Achille Petrilli, CMS Team Leader

  19. SUSY searches in early CMS data

    International Nuclear Information System (INIS)

    Tricomi, A

    2008-01-01

    In the first year of data taking at the LHC, the CMS experiment expects to collect about 1 fb⁻¹ of data, making the first searches for new phenomena possible. All such searches, however, require measurement of the SM background and a detailed understanding of the detector performance, reconstruction algorithms and triggering. The CMS efforts are hence directed towards designing a realistic analysis plan in preparation for data taking. In this paper, the CMS perspectives and analysis strategies for Supersymmetry (SUSY) discovery with early data are presented.

  20. CP violation in CMS expected performance

    CERN Document Server

    Stefanescu, J

    1999-01-01

    The CMS experiment can contribute significantly to the measurement of CP-violation asymmetries. A recent evaluation of the expected precision on the CP-violation parameter $\sin 2\beta$ in the channel $B_d^0 \to J/\psi K_s^0$ has been performed using a simulation of the CMS tracker including full pattern recognition. CMS has also studied the possibility of observing CP violation in the decay channel $B_s^0 \to J/\psi \phi$. The results of these studies are reviewed. (7 refs).

  1. 42 CFR 426.517 - CMS' statement regarding new evidence.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false CMS' statement regarding new evidence. 426.517... DETERMINATIONS Review of an NCD § 426.517 CMS' statement regarding new evidence. (a) CMS may review any new... experts; and (5) Presented during any hearing. (b) CMS may submit a statement regarding whether the new...

  2. Set of CMS posters in Spanish

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  3. Set of CMS posters in Greek

    CERN Multimedia

    Lapka, Marzena; Petrilli, Achille

    2015-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  4. Set of CMS posters (multiple languages)

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  5. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Distributed computing is becoming increasingly important in our daily life. This is because it enables the people who use it to share information more rapidly and increases their productivity. A major characteristic feature of distributed computing is the explicit representation of process logic within a communication system, ...

  6. The Latest from CMS

    CERN Multimedia

    2009-01-01

    CMS is on track to be ready for physics one month in advance of the LHC restart. The final installations are being completed and tests are being run to ensure that the experiment is as well prepared as possible to exploit sustained LHC operation throughout 2010. Physics week in Bologna, Italy, was a valuable time for CMS collaborators to discuss preparations for numerous physics analyses, as well as the performance of the detector during the recent data-taking period with cosmics (CRAFT 09). During this five-week exercise, more than 300 million cosmic events were recorded with the magnetic field on. This large data-set is being used to further improve the sub-detector alignment, calibration and performance whilst awaiting p-p collisions. Meanwhile, in the experimental cavern, Wolfram Zeuner, Deputy Technical Coordinator of CMS, reports "We are now very nearly closed up again. We are just doing the final clean-up work and are ready t...

  7. CMS outreach event to close LS1

    CERN Multimedia

    Achintya Rao

    2015-01-01

    CMS opened its doors to about 700 students from schools near CERN, who visited the detector on 16 and 17 February during the last major CMS outreach event of LS1. Enthusiastic CMS guides spent a day and a half showing the equally enthusiastic visitors, aged 10 to 18, the beauty of CMS and particle physics. The recently installed wheelchair lift was called into action and enabled a visitor who arrived on crutches to access the detector cavern unimpeded. The CMS collaboration had previously devoted a day to school visits after the successful “Neighbourhood Days” in May 2014 and, encouraged by the turnout, decided to extend an invitation to local schools once again. The complement of nearly 40 guides and crowd marshals was aided by a support team that coordinated the transportation of the young guests and received them at Point 5, where a dedicated safety team including first-aiders, security...

  8. Model unspecific search in CMS. First results at 13 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Roemer, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    Following an upgrade in center-of-mass energy from √(s) = 8 TeV to 13 TeV, the LHC delivered first proton-proton collisions at this unprecedented energy in 2015. The CMS experiment recorded data corresponding to an integrated luminosity of 3.7 fb⁻¹. Since many theoretical models predict signal cross sections to increase strongly with the center-of-mass energy, the data taken at √(s) = 13 TeV are competitive with the previous data-taking period even with a lower recorded integrated luminosity. The Model Unspecific Search in CMS (MUSiC) searches for physics beyond the standard model independently of theoretical models. Using an automatic method, kinematic distributions of the data are compared with the standard model expectation in every final state. MUSiC thus reduces the chance of overlooking new physics, since even distributions not covered by dedicated analyses are investigated. This talk outlines changes to the analysis made necessary by the increased center-of-mass energy and first results with lepton-triggered events.
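
    A much simplified illustration of the comparison described above (not the MUSiC algorithm itself): for each final state, quantify how compatible the observed event count is with the Standard Model expectation using a Poisson tail probability, ignoring systematic uncertainties.

```python
# Toy scan over exclusive final states: a Poisson tail probability for an
# excess over the Standard Model expectation (no systematics, toy numbers).
import math
from typing import Dict, Tuple

def poisson_tail(observed: int, expected: float) -> float:
    """P(N >= observed | expected), computed in log space for stability."""
    cumulative = 0.0
    for k in range(observed):
        log_term = -expected + k * math.log(expected) - math.lgamma(k + 1)
        cumulative += math.exp(log_term)
    return max(1.0 - cumulative, 0.0)

def scan(final_states: Dict[str, Tuple[int, float]]) -> None:
    for name, (observed, expected) in sorted(final_states.items()):
        p = poisson_tail(observed, expected)
        print(f"{name:<20} obs={observed:4d} exp={expected:7.1f}  p={p:.3g}")

if __name__ == "__main__":
    # (observed, SM expectation) per exclusive final state; invented values.
    scan({
        "1mu + 2jet":     (1040, 1015.0),
        "1e + 1mu + MET": (  12,    4.2),
        "2mu + 1jet":     ( 330,  345.0),
    })
```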

  9. CMS Centres Worldwide - a New Collaborative Infrastructure

    CERN Document Server

    Taylor, Lucas

    2011-01-01

    Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  10. 42 CFR 423.2264 - Guidelines for CMS review.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Guidelines for CMS review. 423.2264 Section 423....2264 Guidelines for CMS review. In reviewing marketing material or enrollment forms under § 423.2262, CMS determines (unless otherwise specified in additional guidance) that the marketing materials— (a...

  11. The CMS Electromagnetic Calorimeter: Construction, Commissioning and Calibration

    CERN Document Server

    Orimoto, Toyoko

    2009-01-01

    The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) is ready for first collisions. The Electromagnetic Calorimeter (ECAL) of CMS, a high-resolution detector comprising nearly 76000 lead tungstate crystals, will play a crucial role in the coming physics searches undertaken by CMS. The design and performance of the CMS ECAL with test beams, cosmic rays, and first single-beam data will be presented. In addition, the status of the calorimeter and plans for calibration with first collisions will be discussed.

  12. Higher order QCD radiation in top pair production with the CMS detector

    International Nuclear Information System (INIS)

    Flossdorf, Alexander

    2009-10-01

    The Large Hadron Collider at CERN will collide protons with a centre-of-mass energy of up to √(s) = 14 TeV, thereby offering the opportunity to explore a wide range of physics topics. In this thesis the effects of QCD radiation in top pair events are examined. Due to the large top mass, top pairs are well suited for an investigation of gluon emissions. An extensive study comparing different radiation models implemented in Monte Carlo event generators is presented. The transverse momentum distribution of the $t\bar{t}$ system is rather sensitive to radiation influences and is therefore analysed in detail. As hard emissions can be associated with jets, a thorough investigation of these jets is performed. The transverse momentum of hard jets and the rapidity distribution of the hardest jet in the $t\bar{t}$ rest frame are examined. Moreover, an analysis of samples incorporating different radiation models after the full CMS detector simulation is presented, studying the same observables as on generator level. The potential of the CMS experiment to distinguish between different models is estimated and a method to obtain the underlying transverse momentum distribution of the $t\bar{t}$ system is described. (orig.)
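
    For reference, the observable discussed above, the transverse momentum of the $t\bar{t}$ system, is simply the magnitude of the vector sum of the two top-quark transverse momenta; the sketch below computes it for toy four-vectors (not CMS data).

```python
# Toy kinematics: pT of the t-tbar system from two four-vectors. A sizeable
# pT of the pair signals recoil against hard QCD radiation.
import math
from typing import NamedTuple

class FourVector(NamedTuple):
    px: float
    py: float
    pz: float
    e: float

def pt_of_system(a: FourVector, b: FourVector) -> float:
    """pT of the combined a+b system in GeV."""
    return math.hypot(a.px + b.px, a.py + b.py)

if __name__ == "__main__":
    top      = FourVector(120.0,  35.0,  210.0, 300.0)   # toy values
    anti_top = FourVector(-90.0, -10.0, -150.0, 246.0)   # toy values
    print(f"pT(ttbar) = {pt_of_system(top, anti_top):.1f} GeV")
```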

  13. Electronics and triggering challenges for the CMS High Granularity Calorimeter

    Science.gov (United States)

    Lobanov, A.

    2018-02-01

    The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0.2 fC-10 pC), low noise (~2000 e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~20 mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms.

  14. The CMS CERN Analysis Facility (CAF)

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O [Imperial College (United Kingdom); Bonacorsi, D [Universita and INFN, Bologna (Italy); Fanzago, F [Universita and INFN, Padova (Italy); Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer [Conseil Europeen Recherche Nucl. (CERN) Switzerland (Switzerland); Kreuzer, P [Rheinisch-Westfaelische Tech. Hoch. (RWTH) (Germany); Mankel, R [Deutsches Elektronen-Synchrotron (DESY) (Germany); Metson, S [University of Bristol (United Kingdom); Sanches, J Afonso; Teodoro, D, E-mail: Peter.Kreuzer@cern.c [Universidade do Estado do Rio De Janeiro (UERJ) (Brazil)

    2010-04-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we will focus on the performance in terms of networking, disk access and job efficiency and extrapolate prospects for the upcoming LHC first year of data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.

  15. The CMS CERN Analysis Facility (CAF)

    International Nuclear Information System (INIS)

    Buchmueller, O; Bonacorsi, D; Fanzago, F; Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer; Kreuzer, P; Mankel, R; Metson, S; Sanches, J Afonso; Teodoro, D

    2010-01-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we will focus on the performance in terms of networking, disk access and job efficiency and extrapolate prospects for the upcoming LHC first year of data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.

  16. An Innovative Beam Halo Monitor system for the CMS experiment at the LHC: Design, Commissioning and First Beam Results

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00344917; Dabrowski, Anne

    The Compact Muon Solenoid (CMS) is a multi-purpose experiment situated at the Large Hadron Collider (LHC). CMS has the mandate of searching for new physics and making precise measurements of already known mechanisms using data produced by collisions of high-energy particles. To ensure high-quality physics data taking, it is important to monitor and guarantee the quality of the colliding particle beams. This thesis presents the research and design, the integration and the first commissioning results of a novel Beam Halo Monitor (BHM) that was designed and built for the CMS experiment. The BHM provides an online, bunch-by-bunch measurement of background particles created by interactions of the proton beam with residual gas molecules in the vacuum chamber or with collimator material upstream of CMS, separately for each beam. The system consists of two arrays of twenty direction-sensitive detectors that are distributed azimuthally around the outer forward shielding of the CMS experiment. Each detector is ...
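
    A minimal, purely illustrative sketch (not the BHM readout software) of the bunch-by-bunch bookkeeping described above: accumulate background counts per beam and per bunch crossing from direction-tagged hits.

```python
# Toy bunch-by-bunch accumulation of direction-tagged background hits.
from collections import defaultdict
from typing import Dict, Iterable, Tuple

Hit = Tuple[int, str]          # (bunch-crossing id, "beam1" or "beam2")

def bunch_by_bunch(hits: Iterable[Hit]) -> Dict[str, Dict[int, int]]:
    counts: Dict[str, Dict[int, int]] = {"beam1": defaultdict(int),
                                         "beam2": defaultdict(int)}
    for bcid, beam in hits:
        counts[beam][bcid] += 1
    return counts

if __name__ == "__main__":
    toy_hits = [(1, "beam1"), (1, "beam1"), (2, "beam2"), (101, "beam1")]
    summary = bunch_by_bunch(toy_hits)
    for beam, per_bx in summary.items():
        print(beam, dict(per_bx))
```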

  17. Model of CMS Tracker

    CERN Multimedia

    Breuker

    1999-01-01

    A full-scale CMS tracker mock-up displayed temporarily in the hall of Building 40. The purpose of the mock-up is to study the routing of services, assembly and installation. The people in front are only a small fraction of the CMS tracker collaboration. Left to right: M. Atac, R. Castaldi, H. Breuker, D. Pandoulas, P. Petagna, A. Caner, A. Carraro, H. Postema, M. Oriunno, S. da Mota Silva, L. Van Lancker, W. Glessing, G. Benefice, A. Onnela, M. Gaspar, G. M. Bilei

  18. Measurement of pseudorapidity distributions of charged particles in proton-proton collisions at $\\sqrt{s}$ = 8 TeV by the CMS and TOTEM experiments

    CERN Document Server

    Chatrchyan, Serguei; Sirunyan, Albert M; Tumasyan, Armen; Adam, Wolfgang; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Fabjan, Christian; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hartl, Christian; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kiesenhofer, Wolfgang; Knünz, Valentin; Krammer, Manfred; Krätschmer, Ilse; Liko, Dietrich; Mikulec, Ivan; Rabady, Dinyar; Rahbaran, Babak; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Taurok, Anton; Treberer-Treberspurg, Wolfgang; Waltenberger, Wolfgang; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Alderweireldt, Sara; Bansal, Monika; Bansal, Sunil; Cornelis, Tom; De Wolf, Eddi A; Janssen, Xavier; Knutsson, Albert; Luyckx, Sten; Mucibello, Luca; Ochesanu, Silvia; Roland, Benoit; Rougny, Romain; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Van Spilbeeck, Alex; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Heracleous, Natalie; Kalogeropoulos, Alexis; Keaveney, James; Kim, Tae Jeong; Lowette, Steven; Maes, Michael; Olbrechts, Annik; Strom, Derek; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Caillol, Cécile; Clerbaux, Barbara; De Lentdecker, Gilles; Favart, Laurent; Gay, Arnaud; Léonard, Alexandre; Marage, Pierre Edouard; Mohammadi, Abdollah; Perniè, Luca; Reis, Thomas; Seva, Tomislav; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wang, Jian; Adler, Volker; Beernaert, Kelly; Benucci, Leonardo; Cimmino, Anna; Costantini, Silvia; Dildick, Sven; Garcia, Guillaume; Klein, Benjamin; Lellouch, Jérémie; Mccartin, Joseph; Ocampo Rios, Alberto Andres; Ryckbosch, Dirk; Salva Diblen, Sinem; Sigamani, Michael; Strobbe, Nadja; Thyssen, Filip; Tytgat, Michael; Walsh, Sinead; Yazgan, Efe; Zaganidis, Nicolas; Basegmez, Suzan; Beluffi, Camille; Bruno, Giacomo; Castello, Roberto; Caudron, Adrien; Ceard, Ludivine; Da Silveira, Gustavo Gil; Delaere, Christophe; Du Pree, Tristan; Favart, Denis; Forthomme, Laurent; Giammanco, Andrea; Hollar, Jonathan; Jez, Pavel; Komm, Matthias; Lemaitre, Vincent; Liao, Junhui; Militaru, Otilia; Nuttens, Claude; Pagano, Davide; Pin, Arnaud; Piotrzkowski, Krzysztof; Popov, Andrey; Quertenmont, Loic; Selvaggi, Michele; Vidal Marono, Miguel; Vizan Garcia, Jesus Manuel; Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Hammad, Gregory Habib; Alves, Gilvan; Correa Martins Junior, Marcos; Dos Reis Martins, Thiago; Pol, Maria Elena; Henrique Gomes E Souza, Moacyr; Aldá Júnior, Walter Luiz; Carvalho, Wagner; Chinellato, Jose; Custódio, Analu; Melo Da Costa, Eliza; De Jesus Damiao, Dilson; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Malbouisson, Helena; Malek, Magdalena; Matos Figueiredo, Diego; Mundim, Luiz; Nogima, Helio; Prado Da Silva, Wanda Lucia; Santaolalla, Javier; Santoro, Alberto; Sznajder, Andre; Tonelli Manganote, Edmilson José; Vilela Pereira, Antonio; Bernardes, Cesar Augusto; De Almeida Dias, Flavia; Tomei, Thiago; De Moraes Gregores, Eduardo; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Genchev, Vladimir; Iaydjiev, Plamen; Marinov, Andrey; Piperov, Stefan; Rodozov, Mircho; Sultanov, Georgi; Vutova, Mariana; Dimitrov, Anton; Glushkov, Ivan; Hadjiiska, Roumyana; Kozhuharov, Venelin; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Chen, Mingshui; Du, Ran; Jiang, Chun-Hua; Liang, Dong; Liang, Song; Meng, Xiangwei; Plestina, Roko; Tao, Junquan; Wang, Xianyou; Wang, Zheng; Asawatangtrakuldee, 
Chayanit; Ban, Yong; Guo, Yifei; Li, Qiang; Li, Wenbo; Liu, Shuai; Mao, Yajun; Qian, Si-Jin; Wang, Dayong; Zhang, Linlin; Zou, Wei; Avila, Carlos; Carrillo Montoya, Camilo Andres; Chaparro Sierra, Luisa Fernanda; Florez, Carlos; Gomez, Juan Pablo; Gomez Moreno, Bernardo; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Kadija, Kreso; Luetic, Jelena; Mekterovic, Darko; Morovic, Srecko; Sudic, Lucija; Attikis, Alexandros; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Finger, Miroslav; Finger Jr, Michael; Abdelalim, Ahmed Ali; Assran, Yasser; Elgammal, Sherif; Ellithi Kamel, Ali; Mahmoud, Mohammed; Radi, Amr; Kadastik, Mario; Müntel, Mait; Murumaa, Marion; Raidal, Martti; Rebane, Liis; Tiko, Andres; Eerola, Paula; Fedi, Giacomo; Voutilainen, Mikko; Härkönen, Jaakko; Karimäki, Veikko; Kinnunen, Ritva; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Peltola, Timo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Wendland, Lauri; Tuuva, Tuure; Besancon, Marc; Couderc, Fabrice; Dejardin, Marc; Denegri, Daniel; Fabbro, Bernard; Faure, Jean-Louis; Ferri, Federico; Ganjour, Serguei; Givernaud, Alain; Gras, Philippe; Hamel de Monchenault, Gautier; Jarry, Patrick; Locci, Elizabeth; Malcles, Julie; Nayak, Aruna; Rander, John; Rosowsky, André; Titov, Maksym; Baffioni, Stephanie; Beaudette, Florian; Busson, Philippe; Charlot, Claude; Daci, Nadir; Dahms, Torsten; Dalchenko, Mykhailo; Dobrzynski, Ludwik; Florent, Alice; Granier de Cassagnac, Raphael; Miné, Philippe; Mironov, Camelia; Naranjo, Ivo Nicolas; Nguyen, Matthew; Ochando, Christophe; Paganini, Pascal; Sabes, David; Salerno, Roberto; Sauvan, Jean-baptiste; Sirois, Yves; Veelken, Christian; Yilmaz, Yetkin; Zabi, Alexandre; Agram, Jean-Laurent; Andrea, Jeremy; Bloch, Daniel; Brom, Jean-Marie; Chabert, Eric Christian; Collard, Caroline; Conte, Eric; Drouhin, Frédéric; Fontaine, Jean-Charles; Gelé, Denis; Goerlach, Ulrich; Goetzmann, Christophe; Juillot, Pierre; Le Bihan, Anne-Catherine; Van Hove, Pierre; Gadrat, Sébastien; Beauceron, Stephanie; Beaupere, Nicolas; Boudoul, Gaelle; Brochet, Sébastien; Chasserat, Julien; Chierici, Roberto; Contardo, Didier; Depasse, Pierre; El Mamouni, Houmani; Fan, Jiawei; Fay, Jean; Gascon, Susan; Gouzevitch, Maxime; Ille, Bernard; Kurca, Tibor; Lethuillier, Morgan; Mirabito, Laurent; Perries, Stephane; Ruiz Alvarez, José David; Sgandurra, Louis; Sordini, Viola; Vander Donckt, Muriel; Verdier, Patrice; Viret, Sébastien; Xiao, Hong; Tsamalaidze, Zviad; Autermann, Christian; Beranek, Sarah; Bontenackels, Michael; Calpas, Betty; Edelhoff, Matthias; Feld, Lutz; Hindrichs, Otto; Klein, Katja; Ostapchuk, Andrey; Perieanu, Adrian; Raupach, Frank; Sammet, Jan; Schael, Stefan; Sprenger, Daniel; Weber, Hendrik; Wittmer, Bruno; Zhukov, Valery; Ata, Metin; Caudron, Julien; Dietz-Laursonn, Erik; Duchardt, Deborah; Erdmann, Martin; Fischer, Robert; Güth, Andreas; Hebbeker, Thomas; Heidemann, Carsten; Hoepfner, Kerstin; Klingebiel, Dennis; Knutzen, Simon; Kreuzer, Peter; Merschmeyer, Markus; Meyer, Arnd; Olschewski, Mark; Padeken, Klaas; Papacz, Paul; Reithler, Hans; Schmitz, Stefan Antonius; Sonnenschein, Lars; Teyssier, Daniel; Thüer, Sebastian; Weber, Martin; Cherepanov, Vladimir; Erdogan, Yusuf; Flügge, Günter; Geenen, Heiko; Geisler, Matthias; Haj Ahmad, Wael; Hoehle, Felix; Kargoll, Bastian; Kress, Thomas; Kuessel, 
Yvonne; Lingemann, Joschka; Nowack, Andreas; Nugent, Ian Michael; Perchalla, Lars; Pooth, Oliver; Stahl, Achim; Asin, Ivan; Bartosik, Nazar; Behr, Joerg; Behrenhoff, Wolf; Behrens, Ulf; Bell, Alan James; Bergholz, Matthias; Bethani, Agni; Borras, Kerstin; Burgmeier, Armin; Cakir, Altan; Calligaris, Luigi; Campbell, Alan; Choudhury, Somnath; Costanza, Francesco; Diez Pardos, Carmen; Dooling, Samantha; Dorland, Tyler; Eckerlin, Guenter; Eckstein, Doris; Eichhorn, Thomas; Flucke, Gero; Geiser, Achim; Grebenyuk, Anastasia; Gunnellini, Paolo; Habib, Shiraz; Hauk, Johannes; Hellwig, Gregor; Hempel, Maria; Horton, Dean; Jung, Hannes; Kasemann, Matthias; Katsas, Panagiotis; Kieseler, Jan; Kleinwort, Claus; Krämer, Mira; Krücker, Dirk; Lange, Wolfgang; Leonard, Jessica; Lipka, Katerina; Lohmann, Wolfgang; Lutz, Benjamin; Mankel, Rainer; Marfin, Ihar; Melzer-Pellmann, Isabell-Alissandra; Meyer, Andreas Bernhard; Mnich, Joachim; Mussgiller, Andreas; Naumann-Emme, Sebastian; Novgorodova, Olga; Nowak, Friederike; Perrey, Hanno; Petrukhin, Alexey; Pitzl, Daniel; Placakyte, Ringaile; Raspereza, Alexei; Ribeiro Cipriano, Pedro M; Riedl, Caroline; Ron, Elias; Sahin, Mehmet Özgür; Salfeld-Nebgen, Jakob; Saxena, Pooja; Schmidt, Ringo; Schoerner-Sadenius, Thomas; Schröder, Matthias; Stein, Matthias; Vargas Trevino, Andrea Del Rocio; Walsh, Roberval; Wissing, Christoph; Aldaya Martin, Maria; Blobel, Volker; Enderle, Holger; Erfle, Joachim; Garutti, Erika; Goebel, Kristin; Görner, Martin; Gosselink, Martijn; Haller, Johannes; Höing, Rebekka Sophie; Kirschenmann, Henning; Klanner, Robert; Kogler, Roman; Lange, Jörn; Lapsien, Tobias; Lenz, Teresa; Marchesini, Ivan; Ott, Jochen; Peiffer, Thomas; Pietsch, Niklas; Rathjens, Denis; Sander, Christian; Schettler, Hannes; Schleper, Peter; Schlieckau, Eike; Schmidt, Alexander; Seidel, Markus; Sibille, Jennifer; Sola, Valentina; Stadie, Hartmut; Steinbrück, Georg; Troendle, Daniel; Usai, Emanuele; Vanelderen, Lukas; Barth, Christian; Baus, Colin; Berger, Joram; Böser, Christian; Butz, Erik; Chwalek, Thorsten; De Boer, Wim; Descroix, Alexis; Dierlamm, Alexander; Feindt, Michael; Guthoff, Moritz; Hartmann, Frank; Hauth, Thomas; Held, Hauke; Hoffmann, Karl-Heinz; Husemann, Ulrich; Katkov, Igor; Kornmayer, Andreas; Kuznetsova, Ekaterina; Lobelle Pardo, Patricia; Martschei, Daniel; Mozer, Matthias Ulrich; Müller, Thomas; Niegel, Martin; Nürnberg, Andreas; Oberst, Oliver; Quast, Gunter; Rabbertz, Klaus; Ratnikov, Fedor; Röcker, Steffen; Schilling, Frank-Peter; Schott, Gregory; Simonis, Hans-Jürgen; Stober, Fred-Markus Helmut; Ulrich, Ralf; Wagner-Kuhr, Jeannine; Wayand, Stefan; Weiler, Thomas; Wolf, Roger; Zeise, Manuel; Anagnostou, Georgios; Daskalakis, Georgios; Geralis, Theodoros; Kesisoglou, Stilianos; Kyriakis, Aristotelis; Loukas, Demetrios; Markou, Athanasios; Markou, Christos; Ntomari, Eleni; Psallidas, Andreas; Topsis-Giotis, Iasonas; Gouskos, Loukas; Panagiotou, Apostolos; Saoulidou, Niki; Stiliaris, Efstathios; Aslanoglou, Xenofon; Evangelou, Ioannis; Flouris, Giannis; Foudas, Costas; Jones, John; Kokkas, Panagiotis; Manthos, Nikolaos; Papadopoulos, Ioannis; Paradas, Evangelos; Bencze, Gyorgy; Hajdu, Csaba; Hidas, Pàl; Horvath, Dezso; Sikler, Ferenc; Veszpremi, Viktor; Vesztergombi, Gyorgy; Zsigmond, Anna Julia; Beni, Noemi; Czellar, Sandor; Molnar, Jozsef; Palinkas, Jozsef; Szillasi, Zoltan; Karancsi, János; Raics, Peter; Trocsanyi, Zoltan Laszlo; Ujvari, Balazs; Swain, Sanjay Kumar; Beri, Suman Bala; Bhatnagar, Vipin; Dhingra, Nitish; Gupta, Ruchi; Kaur, Manjit; 
Mehta, Manuk Zubin; Mittal, Monika; Nishu, Nishu; Sharma, Archana; Singh, Jasbir; Kumar, Ashok; Kumar, Arun; Ahuja, Sudha; Bhardwaj, Ashutosh; Choudhary, Brajesh C; Kumar, Ajay; Malhotra, Shivali; Naimuddin, Md; Ranjan, Kirti; Sharma, Varun; Shivpuri, Ram Krishen; Banerjee, Sunanda; Bhattacharya, Satyaki; Chatterjee, Kalyanmoy; Dutta, Suchandra; Gomber, Bhawna; Jain, Sandhya; Jain, Shilpi; Khurana, Raman; Modak, Atanu; Mukherjee, Swagata; Roy, Debarati; Sarkar, Subir; Sharan, Manoj; Singh, Anil; Abdulsalam, Abdulla; Dutta, Dipanwita; Kailas, Swaminathan; Kumar, Vineet; Mohanty, Ajit Kumar; Pant, Lalit Mohan; Shukla, Prashant; Topkar, Anita; Aziz, Tariq; Chatterjee, Rajdeep Mohan; Ganguly, Sanmay; Ghosh, Saranya; Guchait, Monoranjan; Gurtu, Atul; Kole, Gouranga; Kumar, Sanjeev; Maity, Manas; Majumder, Gobinda; Mazumdar, Kajari; Mohanty, Gagan Bihari; Parida, Bibhuti; Sudhakar, Katta; Wickramage, Nadeesha; Banerjee, Sudeshna; Dugad, Shashikant; Arfaei, Hessamaddin; Bakhshiansohi, Hamed; Behnamian, Hadi; Etesami, Seyed Mohsen; Fahim, Ali; Jafari, Abideh; Khakzad, Mohsen; Mohammadi Najafabadi, Mojtaba; Naseri, Mohsen; Paktinat Mehdiabadi, Saeid; Safarzadeh, Batool; Zeinali, Maryam; Grunewald, Martin; Abbrescia, Marcello; Barbone, Lucia; Calabria, Cesare; Chhibra, Simranjit Singh; Colaleo, Anna; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Fiore, Luigi; Iaselli, Giuseppe; Maggi, Giorgio; Maggi, Marcello; Marangelli, Bartolomeo; My, Salvatore; Nuzzo, Salvatore; Pacifico, Nicola; Pompili, Alexis; Pugliese, Gabriella; Radogna, Raffaella; Selvaggi, Giovanna; Silvestris, Lucia; Singh, Gurpreet; Venditti, Rosamaria; Verwilligen, Piet; Zito, Giuseppe; Abbiendi, Giovanni; Benvenuti, Alberto; Bonacorsi, Daniele; Braibant-Giacomelli, Sylvie; Brigliadori, Luca; Campanini, Renato; Capiluppi, Paolo; Castro, Andrea; Cavallo, Francesca Romana; Codispoti, Giuseppe; Cuffiani, Marco; Dallavalle, Gaetano-Marco; Fabbri, Fabrizio; Fanfani, Alessandra; Fasanella, Daniele; Giacomelli, Paolo; Grandi, Claudio; Guiducci, Luigi; Marcellini, Stefano; Masetti, Gianni; Meneghelli, Marco; Montanari, Alessandro; Navarria, Francesco; Odorici, Fabrizio; Perrotta, Andrea; Primavera, Federica; Rossi, Antonio; Rovelli, Tiziano; Siroli, Gian Piero; Tosi, Nicolò; Travaglini, Riccardo; Albergo, Sebastiano; Cappello, Gigi; Chiorboli, Massimiliano; Costa, Salvatore; Giordano, Ferdinando; Potenza, Renato; Tricomi, Alessia; Tuve, Cristina; Barbagli, Giuseppe; Ciulli, Vitaliano; Civinini, Carlo; D'Alessandro, Raffaello; Focardi, Ettore; Gallo, Elisabetta; Gonzi, Sandro; Gori, Valentina; Lenzi, Piergiulio; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Tropiano, Antonio; Benussi, Luigi; Bianco, Stefano; Fabbri, Franco; Piccolo, Davide; Fabbricatore, Pasquale; Ferretti, Roberta; Ferro, Fabrizio; Lo Vetere, Maurizio; Musenich, Riccardo; Robutti, Enrico; Tosi, Silvano; Benaglia, Andrea; Dinardo, Mauro Emanuele; Fiorendi, Sara; Gennai, Simone; Gerosa, Raffaele; Ghezzi, Alessio; Govoni, Pietro; Lucchini, Marco Toliman; Malvezzi, Sandra; Manzoni, Riccardo Andrea; Martelli, Arabella; Marzocchi, Badder; Menasce, Dario; Moroni, Luigi; Paganoni, Marco; Pedrini, Daniele; Ragazzi, Stefano; Redaelli, Nicola; Tabarelli de Fatis, Tommaso; Buontempo, Salvatore; Cavallo, Nicola; Fabozzi, Francesco; Iorio, Alberto Orso Maria; Lista, Luca; Meola, Sabino; Merola, Mario; Paolucci, Pierluigi; Azzi, Patrizia; Bacchetta, Nicola; Branca, Antonio; Carlin, Roberto; Checchia, Paolo; Dorigo, Tommaso; Dosselli, Umberto; Galanti, Mario; Gasparini, 
Fabrizio; Gasparini, Ugo; Giubilato, Piero; Gozzelino, Andrea; Kanishchev, Konstantin; Lacaprara, Stefano; Lazzizzera, Ignazio; Margoni, Martino; Meneguzzo, Anna Teresa; Pazzini, Jacopo; Pegoraro, Matteo; Pozzobon, Nicola; Ronchese, Paolo; Simonetto, Franco; Torassa, Ezio; Tosi, Mia; Triossi, Andrea; Ventura, Sandro; Zotto, Pierluigi; Zucchetta, Alberto; Zumerle, Gianni; Gabusi, Michele; Ratti, Sergio P; Riccardi, Cristina; Vitulo, Paolo; Biasini, Maurizio; Bilei, Gian Mario; Fanò, Livio; Lariccia, Paolo; Mantovani, Giancarlo; Menichelli, Mauro; Romeo, Francesco; Saha, Anirban; Santocchia, Attilio; Spiezia, Aniello; Androsov, Konstantin; Azzurri, Paolo; Bagliesi, Giuseppe; Bernardini, Jacopo; Boccali, Tommaso; Broccolo, Giuseppe; Castaldi, Rino; Ciocci, Maria Agnese; Dell'Orso, Roberto; Fiori, Francesco; Foà, Lorenzo; Giassi, Alessandro; Grippo, Maria Teresa; Kraan, Aafke; Ligabue, Franco; Lomtadze, Teimuraz; Martini, Luca; Messineo, Alberto; Moon, Chang-Seong; Palla, Fabrizio; Rizzi, Andrea; Savoy-Navarro, Aurore; Serban, Alin Titus; Spagnolo, Paolo; Squillacioti, Paola; Tenchini, Roberto; Tonelli, Guido; Venturi, Andrea; Verdini, Piero Giorgio; Vernieri, Caterina; Barone, Luciano; Cavallari, Francesca; Del Re, Daniele; Diemoz, Marcella; Grassi, Marco; Jorda, Clara; Longo, Egidio; Margaroli, Fabrizio; Meridiani, Paolo; Micheli, Francesco; Nourbakhsh, Shervin; Organtini, Giovanni; Paramatti, Riccardo; Rahatlou, Shahram; Rovelli, Chiara; Soffi, Livia; Traczyk, Piotr; Amapane, Nicola; Arcidiacono, Roberta; Argiro, Stefano; Arneodo, Michele; Bellan, Riccardo; Biino, Cristina; Cartiglia, Nicolo; Casasso, Stefano; Costa, Marco; Degano, Alessandro; Demaria, Natale; Mariotti, Chiara; Maselli, Silvia; Migliore, Ernesto; Monaco, Vincenzo; Musich, Marco; Obertino, Maria Margherita; Ortona, Giacomo; Pacher, Luca; Pastrone, Nadia; Pelliccioni, Mario; Potenza, Alberto; Romero, Alessandra; Ruspa, Marta; Sacchi, Roberto; Solano, Ada; Staiano, Amedeo; Tamponi, Umberto; Belforte, Stefano; Candelise, Vieri; Casarsa, Massimo; Cossutti, Fabio; Della Ricca, Giuseppe; Gobbo, Benigno; La Licata, Chiara; Marone, Matteo; Montanino, Damiana; Penzo, Aldo; Schizzi, Andrea; Umer, Tomo; Zanetti, Anna; Chang, Sunghyun; Kim, Tae Yeon; Nam, Soon-Kwon; Kim, Dong Hee; Kim, Gui Nyun; Kim, Ji Eun; Kim, Min Suk; Kong, Dae Jung; Lee, Sangeun; Oh, Young Do; Park, Hyangkyu; Son, Dong-Chul; Kim, Jae Yool; Kim, Zero Jaeho; Song, Sanghyeon; Choi, Suyong; Gyun, Dooyeon; Hong, Byung-Sik; Jo, Mihee; Kim, Hyunchul; Kim, Yongsun; Lee, Kyong Sei; Park, Sung Keun; Roh, Youn; Choi, Minkyoo; Kim, Ji Hyun; Park, Chawon; Park, Inkyu; Park, Sangnam; Ryu, Geonmo; Choi, Young-Il; Choi, Young Kyu; Goh, Junghwan; Kwon, Eunhyang; Lee, Byounghoon; Lee, Jongseok; Seo, Hyunkwan; Yu, Intae; Juodagalvis, Andrius; Komaragiri, Jyothsna Rani; Castilla-Valdez, Heriberto; De La Cruz-Burelo, Eduard; Heredia-de La Cruz, Ivan; Lopez-Fernandez, Ricardo; Martínez-Ortega, Jorge; Sánchez Hernández, Alberto; Villasenor-Cendejas, Luis Manuel; Carrillo Moreno, Salvador; Vazquez Valencia, Fabiola; Salazar Ibarguen, Humberto Antonio; Casimiro Linares, Edgar; Morelos Pineda, Antonio; Krofcheck, David; Butler, Philip H; Doesburg, Robert; Reucroft, Steve; Ahmad, Muhammad; Asghar, Muhammad Irfan; Butt, Jamila; Hoorani, Hafeez R; Khan, Wajid Ali; Khurshid, Taimoor; Qazi, Shamona; Shah, Mehar Ali; Shoaib, Muhammad; Bialkowska, Helena; Bluj, Michal; Boimska, Bożena; Frueboes, Tomasz; Górski, Maciej; Kazana, Malgorzata; Nawrocki, Krzysztof; Romanowska-Rybinska, Katarzyna; 
Szleper, Michal; Wrochna, Grzegorz; Zalewski, Piotr; Brona, Grzegorz; Bunkowski, Karol; Cwiok, Mikolaj; Dominik, Wojciech; Doroba, Krzysztof; Kalinowski, Artur; Konecki, Marcin; Krolikowski, Jan; Misiura, Maciej; Wolszczak, Weronika; Bargassa, Pedrame; Beirão Da Cruz E Silva, Cristóvão; Faccioli, Pietro; Ferreira Parracho, Pedro Guilherme; Gallinaro, Michele; Nguyen, Federico; Rodrigues Antunes, Joao; Seixas, Joao; Varela, Joao; Vischia, Pietro; Afanasiev, Serguei; Bunin, Pavel; Golutvin, Igor; Gorbunov, Ilya; Kamenev, Alexey; Karjavin, Vladimir; Konoplyanikov, Viktor; Kozlov, Guennady; Lanev, Alexander; Malakhov, Alexander; Matveev, Viktor; Moisenz, Petr; Palichik, Vladimir; Perelygin, Victor; Shmatov, Sergey; Skatchkov, Nikolai; Smirnov, Vitaly; Zarubin, Anatoli; Golovtsov, Victor; Ivanov, Yury; Kim, Victor; Levchenko, Petr; Murzin, Victor; Oreshkin, Vadim; Smirnov, Igor; Sulimov, Valentin; Uvarov, Lev; Vavilov, Sergey; Vorobyev, Alexey; Vorobyev, Andrey; Andreev, Yuri; Dermenev, Alexander; Gninenko, Sergei; Golubev, Nikolai; Kirsanov, Mikhail; Krasnikov, Nikolai; Pashenkov, Anatoli; Tlisov, Danila; Toropin, Alexander; Epshteyn, Vladimir; Gavrilov, Vladimir; Lychkovskaya, Natalia; Popov, Vladimir; Safronov, Grigory; Semenov, Sergey; Spiridonov, Alexander; Stolin, Viatcheslav; Vlasov, Evgueni; Zhokin, Alexander; Andreev, Vladimir; Azarkin, Maksim; Dremin, Igor; Kirakosyan, Martin; Leonidov, Andrey; Mesyats, Gennady; Rusakov, Sergey V; Vinogradov, Alexey; Belyaev, Andrey; Bogdanova, Galina; Boos, Edouard; Khein, Lev; Klyukhin, Vyacheslav; Kodolova, Olga; Lokhtin, Igor; Lukina, Olga; Obraztsov, Stepan; Petrushanko, Sergey; Proskuryakov, Alexander; Savrin, Viktor; Volkov, Vladimir; Azhgirey, Igor; Bayshev, Igor; Bitioukov, Sergei; Kachanov, Vassili; Kalinin, Alexey; Konstantinov, Dmitri; Krychkine, Victor; Petrov, Vladimir; Ryutin, Roman; Sobol, Andrei; Tourtchanovitch, Leonid; Troshin, Sergey; Tyurin, Nikolay; Uzunian, Andrey; Volkov, Alexey; Adzic, Petar; Dordevic, Milos; Ekmedzic, Marko; Milosevic, Jovan; Aguilar-Benitez, Manuel; Alcaraz Maestre, Juan; Battilana, Carlo; Calvo, Enrique; Cerrada, Marcos; Chamizo Llatas, Maria; Colino, Nicanor; De La Cruz, Begona; Delgado Peris, Antonio; Domínguez Vázquez, Daniel; Fernandez Bedoya, Cristina; Fernández Ramos, Juan Pablo; Ferrando, Antonio; Flix, Jose; Fouz, Maria Cruz; Garcia-Abia, Pablo; Gonzalez Lopez, Oscar; Goy Lopez, Silvia; Hernandez, Jose M; Josa, Maria Isabel; Merino, Gonzalo; Navarro De Martino, Eduardo; Puerta Pelayo, Jesus; Quintario Olmeda, Adrián; Redondo, Ignacio; Romero, Luciano; Senghi Soares, Mara; Willmott, Carlos; Albajar, Carmen; de Trocóniz, Jorge F; Missiroli, Marino; Brun, Hugues; Cuevas, Javier; Fernandez Menendez, Javier; Folgueras, Santiago; Gonzalez Caballero, Isidro; Lloret Iglesias, Lara; Brochero Cifuentes, Javier Andres; Cabrillo, Iban Jose; Calderon, Alicia; Duarte Campderros, Jordi; Fernandez, Marcos; Gomez, Gervasio; Gonzalez Sanchez, Javier; Graziano, Alberto; Lopez Virto, Amparo; Marco, Jesus; Marco, Rafael; Martinez Rivero, Celso; Matorras, Francisco; Munoz Sanchez, Francisca Javiela; Piedra Gomez, Jonatan; Rodrigo, Teresa; Rodríguez-Marrero, Ana Yaiza; Ruiz-Jimeno, Alberto; Scodellaro, Luca; Vila, Ivan; Vilar Cortabitarte, Rocio; Abbaneo, Duccio; Auffray, Etiennette; Auzinger, Georg; Bachtis, Michail; Baillon, Paul; Ball, Austin; Barney, David; Bendavid, Joshua; Benhabib, Lamia; Benitez, Jose F; Bernet, Colin; Bianchi, Giovanni; Bloch, Philippe; Bocci, Andrea; Bonato, Alessio; Bondu, Olivier; Botta, 
Cristina; Breuker, Horst; Camporesi, Tiziano; Cerminara, Gianluca; Christiansen, Tim; Coarasa Perez, Jose Antonio; Colafranceschi, Stefano; D'Alfonso, Mariarosaria; D'Enterria, David; Dabrowski, Anne; David Tinoco Mendes, Andre; De Guio, Federico; De Roeck, Albert; De Visscher, Simon; Di Guida, Salvatore; Dobson, Marc; Dupont-Sagorin, Niels; Elliott-Peisert, Anna; Eugster, Jürg; Franzoni, Giovanni; Funk, Wolfgang; Giffels, Manuel; Gigi, Dominique; Gill, Karl; Giordano, Domenico; Girone, Maria; Giunta, Marina; Glege, Frank; Gomez-Reino Garrido, Robert; Gowdy, Stephen; Guida, Roberto; Hammer, Josef; Hansen, Magnus; Harris, Philip; Innocente, Vincenzo; Janot, Patrick; Karavakis, Edward; Kousouris, Konstantinos; Krajczar, Krisztian; Lecoq, Paul; Lourenco, Carlos; Magini, Nicolo; Malgeri, Luca; Mannelli, Marcello; Masetti, Lorenzo; Meijers, Frans; Mersi, Stefano; Meschi, Emilio; Moortgat, Filip; Mulders, Martijn; Musella, Pasquale; Orsini, Luciano; Palencia Cortezon, Enrique; Perez, Emmanuelle; Perrozzi, Luca; Petrilli, Achille; Petrucciani, Giovanni; Pfeiffer, Andreas; Pierini, Maurizio; Pimiä, Martti; Piparo, Danilo; Plagge, Michael; Racz, Attila; Reece, William; Rolandi, Gigi; Rovere, Marco; Sakulin, Hannes; Santanastasio, Francesco; Schäfer, Christoph; Schwick, Christoph; Sekmen, Sezen; Sharma, Archana; Siegrist, Patrice; Silva, Pedro; Simon, Michal; Sphicas, Paraskevas; Spiga, Daniele; Steggemann, Jan; Stieger, Benjamin; Stoye, Markus; Tsirou, Andromachi; Veres, Gabor Istvan; Vlimant, Jean-Roch; Wöhri, Hermine Katharina; Zeuner, Wolfram Dietrich; Bertl, Willi; Deiters, Konrad; Erdmann, Wolfram; Horisberger, Roland; Ingram, Quentin; Kaestli, Hans-Christian; König, Stefan; Kotlinski, Danek; Langenegger, Urs; Renker, Dieter; Rohe, Tilman; Bachmair, Felix; Bäni, Lukas; Bianchini, Lorenzo; Bortignon, Pierluigi; Buchmann, Marco-Andrea; Casal, Bruno; Chanon, Nicolas; Deisher, Amanda; Dissertori, Günther; Dittmar, Michael; Donegà, Mauro; Dünser, Marc; Eller, Philipp; Grab, Christoph; Hits, Dmitry; Lustermann, Werner; Mangano, Boris; Marini, Andrea Carlo; Martinez Ruiz del Arbol, Pablo; Meister, Daniel; Mohr, Niklas; Nägeli, Christoph; Nef, Pascal; Nessi-Tedaldi, Francesca; Pandolfi, Francesco; Pape, Luc; Pauss, Felicitas; Peruzzi, Marco; Quittnat, Milena; Ronga, Frederic Jean; Rossini, Marco; Starodumov, Andrei; Takahashi, Maiko; Tauscher, Ludwig; Theofilatos, Konstantinos; Treille, Daniel; Wallny, Rainer; Weber, Hannsjoerg Artur; Amsler, Claude; Chiochia, Vincenzo; De Cosa, Annapaola; Favaro, Carlotta; Hinzmann, Andreas; Hreus, Tomas; Ivova Rikova, Mirena; Kilminster, Benjamin; Millan Mejias, Barbara; Ngadiuba, Jennifer; Robmann, Peter; Snoek, Hella; Taroni, Silvia; Verzetti, Mauro; Yang, Yong; Cardaci, Marco; Chen, Kuan-Hsin; Ferro, Cristina; Kuo, Chia-Ming; Li, Syue-Wei; Lin, Willis; Lu, Yun-Ju; Volpe, Roberta; Yu, Shin-Shan; Bartalini, Paolo; Chang, Paoti; Chang, You-Hao; Chang, Yu-Wei; Chao, Yuan; Chen, Kai-Feng; Chen, Po-Hsun; Dietz, Charles; Grundler, Ulysses; Hou, George Wei-Shu; Hsiung, Yee; Kao, Kai-Yi; Lei, Yeong-Jyi; Liu, Yueh-Feng; Lu, Rong-Shyang; Majumder, Devdatta; Petrakou, Eleni; Shi, Xin; Shiu, Jing-Ge; Tzeng, Yeng-Ming; Wang, Minzu; Wilken, Rachel; Asavapibhop, Burin; Suwonjandee, Narumon; Adiguzel, Aytul; Bakirci, Mustafa Numan; Cerci, Salim; Dozen, Candan; Dumanoglu, Isa; Eskut, Eda; Girgis, Semiray; Gokbulut, Gul; Gurpinar, Emine; Hos, Ilknur; Kangal, Evrim Ersin; Kayis Topaksu, Aysel; Onengut, Gulsen; Ozdemir, Kadri; Ozturk, Sertac; Polatoz, Ayse; Sogut, Kenan; Sunar Cerci, 
Deniz; Tali, Bayram; Topakli, Huseyin; Vergili, Mehmet; Akin, Ilina Vasileva; Aliev, Takhmasib; Bilin, Bugra; Bilmis, Selcuk; Deniz, Muhammed; Gamsizkan, Halil; Guler, Ali Murat; Karapinar, Guler; Ocalan, Kadir; Ozpineci, Altug; Serin, Meltem; Sever, Ramazan; Surat, Ugur Emrah; Yalvac, Metin; Zeyrek, Mehmet; Gülmez, Erhan; Isildak, Bora; Kaya, Mithat; Kaya, Ozlem; Ozkorucuklu, Suat; Bahtiyar, Hüseyin; Barlas, Esra; Cankocak, Kerem; Günaydin, Yusuf Oguzhan; Vardarlı, Fuat Ilkehan; Yücel, Mete; Levchuk, Leonid; Sorokin, Pavel; Brooke, James John; Clement, Emyr; Cussans, David; Flacher, Henning; Frazier, Robert; Goldstein, Joel; Grimes, Mark; Heath, Greg P; Heath, Helen F; Jacob, Jeson; Kreczko, Lukasz; Lucas, Chris; Meng, Zhaoxia; Newbold, Dave M; Paramesvaran, Sudarshan; Poll, Anthony; Senkin, Sergey; Smith, Vincent J; Williams, Thomas; Bell, Ken W; Belyaev, Alexander; Brew, Christopher; Brown, Robert M; Cockerill, David JA; Coughlan, John A; Harder, Kristian; Harper, Sam; Ilic, Jelena; Olaiya, Emmanuel; Petyt, David; Shepherd-Themistocleous, Claire; Thea, Alessandro; Tomalin, Ian R; Womersley, William John; Worm, Steven; Baber, Mark; Bainbridge, Robert; Buchmuller, Oliver; Burton, Darren; Colling, David; Cripps, Nicholas; Cutajar, Michael; Dauncey, Paul; Davies, Gavin; Della Negra, Michel; Ferguson, William; Fulcher, Jonathan; Futyan, David; Gilbert, Andrew; Guneratne Bryer, Arlo; Hall, Geoffrey; Hatherell, Zoe; Hays, Jonathan; Iles, Gregory; Jarvis, Martyn; Karapostoli, Georgia; Kenzie, Matthew; Lane, Rebecca; Lucas, Robyn; Lyons, Louis; Magnan, Anne-Marie; Marrouche, Jad; Mathias, Bryn; Nandi, Robin; Nash, Jordan; Nikitenko, Alexander; Pela, Joao; Pesaresi, Mark; Petridis, Konstantinos; Pioppi, Michele; Raymond, David Mark; Rogerson, Samuel; Rose, Andrew; Seez, Christopher; Sharp, Peter; Sparrow, Alex; Tapper, Alexander; Vazquez Acosta, Monica; Virdee, Tejinder; Wakefield, Stuart; Wardle, Nicholas; Cole, Joanne; Hobson, Peter R; Khan, Akram; Kyberd, Paul; Leggat, Duncan; Leslie, Dawn; Martin, William; Reid, Ivan; Symonds, Philip; Teodorescu, Liliana; Turner, Mark; Dittmann, Jay; Hatakeyama, Kenichi; Kasmi, Azeddine; Liu, Hongxuan; Scarborough, Tara; Charaf, Otman; Cooper, Seth; Henderson, Conor; Rumerio, Paolo; Avetisyan, Aram; Bose, Tulika; Fantasia, Cory; Heister, Arno; Lawson, Philip; Lazic, Dragoslav; Rohlf, James; Sperka, David; St John, Jason; Sulak, Lawrence; Alimena, Juliette; Bhattacharya, Saptaparna; Christopher, Grant; Cutts, David; Demiragli, Zeynep; Ferapontov, Alexey; Garabedian, Alex; Heintz, Ulrich; Jabeen, Shabnam; Kukartsev, Gennadiy; Laird, Edward; Landsberg, Greg; Luk, Michael; Narain, Meenakshi; Segala, Michael; Sinthuprasith, Tutanon; Speer, Thomas; Swanson, Joshua; Breedon, Richard; Breto, Guillermo; Calderon De La Barca Sanchez, Manuel; Chauhan, Sushil; Chertok, Maxwell; Conway, John; Conway, Rylan; Cox, Peter Timothy; Erbacher, Robin; Gardner, Michael; Ko, Winston; Kopecky, Alexandra; Lander, Richard; Miceli, Tia; Pellett, Dave; Pilot, Justin; Ricci-Tam, Francesca; Rutherford, Britney; Searle, Matthew; Shalhout, Shalhout; Smith, John; Squires, Michael; Tripathi, Mani; Wilbur, Scott; Yohay, Rachel; Andreev, Valeri; Cline, David; Cousins, Robert; Erhan, Samim; Everaerts, Pieter; Farrell, Chris; Felcini, Marta; Hauser, Jay; Ignatenko, Mikhail; Jarvis, Chad; Rakness, Gregory; Schlein, Peter; Takasugi, Eric; Valuev, Vyacheslav; Weber, Matthias; Babb, John; Clare, Robert; Ellison, John Anthony; Gary, J William; Hanson, Gail; Heilman, Jesse; Jandir, Pawandeep; Lacroix, 
Florent; Liu, Hongliang; Long, Owen Rosser; Luthra, Arun; Malberti, Martina; Nguyen, Harold; Shrinivas, Amithabh; Sturdy, Jared; Sumowidagdo, Suharyo; Wimpenny, Stephen; Andrews, Warren; Branson, James G; Cerati, Giuseppe Benedetto; Cittolin, Sergio; D'Agnolo, Raffaele Tito; Evans, David; Holzner, André; Kelley, Ryan; Kovalskyi, Dmytro; Lebourgeois, Matthew; Letts, James; Macneill, Ian; Padhi, Sanjay; Palmer, Christopher; Pieri, Marco; Sani, Matteo; Sharma, Vivek; Simon, Sean; Sudano, Elizabeth; Tadel, Matevz; Tu, Yanjun; Vartak, Adish; Wasserbaech, Steven; Würthwein, Frank; Yagil, Avraham; Yoo, Jaehyeok; Barge, Derek; Campagnari, Claudio; Danielson, Thomas; Flowers, Kristen; Geffert, Paul; George, Christopher; Golf, Frank; Incandela, Joe; Justus, Christopher; Magaña Villalba, Ricardo; Mccoll, Nickolas; Pavlunin, Viktor; Richman, Jeffrey; Rossin, Roberto; Stuart, David; To, Wing; West, Christopher; Apresyan, Artur; Bornheim, Adolf; Bunn, Julian; Chen, Yi; Di Marco, Emanuele; Duarte, Javier; Kcira, Dorian; Mott, Alexander; Newman, Harvey B; Pena, Cristian; Rogan, Christopher; Spiropulu, Maria; Timciuc, Vladlen; Wilkinson, Richard; Xie, Si; Zhu, Ren-Yuan; Azzolini, Virginia; Calamba, Aristotle; Carroll, Ryan; Ferguson, Thomas; Iiyama, Yutaro; Jang, Dong Wook; Paulini, Manfred; Russ, James; Vogel, Helmut; Vorobiev, Igor; Cumalat, John Perry; Drell, Brian Robert; Ford, William T; Gaz, Alessandro; Luiggi Lopez, Eduardo; Nauenberg, Uriel; Smith, James; Stenson, Kevin; Ulmer, Keith; Wagner, Stephen Robert; Alexander, James; Chatterjee, Avishek; Eggert, Nicholas; Gibbons, Lawrence Kent; Hopkins, Walter; Khukhunaishvili, Aleko; Kreis, Benjamin; Mirman, Nathan; Nicolas Kaufman, Gala; Patterson, Juliet Ritchie; Ryd, Anders; Salvati, Emmanuele; Sun, Werner; Teo, Wee Don; Thom, Julia; Thompson, Joshua; Tucker, Jordan; Weng, Yao; Winstrom, Lucas; Wittich, Peter; Winn, Dave; Abdullin, Salavat; Albrow, Michael; Anderson, Jacob; Apollinari, Giorgio; Bauerdick, Lothar AT; Beretvas, Andrew; Berryhill, Jeffrey; Bhat, Pushpalatha C; Burkett, Kevin; Butler, Joel Nathan; Chetluru, Vasundhara; Cheung, Harry; Chlebana, Frank; Cihangir, Selcuk; Elvira, Victor Daniel; Fisk, Ian; Freeman, Jim; Gao, Yanyan; Gottschalk, Erik; Gray, Lindsey; Green, Dan; Grünendahl, Stefan; Gutsche, Oliver; Hare, Daryl; Harris, Robert M; Hirschauer, James; Hooberman, Benjamin; Jindariani, Sergo; Johnson, Marvin; Joshi, Umesh; Kaadze, Ketino; Klima, Boaz; Kwan, Simon; Linacre, Jacob; Lincoln, Don; Lipton, Ron; Lykken, Joseph; Maeshima, Kaori; Marraffino, John Michael; Martinez Outschoorn, Verena Ingrid; Maruyama, Sho; Mason, David; McBride, Patricia; Mishra, Kalanand; Mrenna, Stephen; Musienko, Yuri; Nahn, Steve; Newman-Holmes, Catherine; O'Dell, Vivian; Prokofyev, Oleg; Ratnikova, Natalia; Sexton-Kennedy, Elizabeth; Sharma, Seema; Spalding, William J; Spiegel, Leonard; Taylor, Lucas; Tkaczyk, Slawek; Tran, Nhan Viet; Uplegger, Lorenzo; Vaandering, Eric Wayne; Vidal, Richard; Whitbeck, Andrew; Whitmore, Juliana; Wu, Weimin; Yang, Fan; Yun, Jae Chul; Acosta, Darin; Avery, Paul; Bourilkov, Dimitri; Cheng, Tongguang; Das, Souvik; De Gruttola, Michele; Di Giovanni, Gian Piero; Dobur, Didar; Field, Richard D; Fisher, Matthew; Fu, Yu; Furic, Ivan-Kresimir; Hugon, Justin; Kim, Bockjoo; Konigsberg, Jacobo; Korytov, Andrey; Kropivnitskaya, Anna; Kypreos, Theodore; Low, Jia Fu; Matchev, Konstantin; Milenovic, Predrag; Mitselmakher, Guenakh; Muniz, Lana; Rinkevicius, Aurelijus; Shchutska, Lesya; Skhirtladze, Nikoloz; Snowball, Matthew; Yelton, John; 
Zakaria, Mohammed; Gaultney, Vanessa; Hewamanage, Samantha; Linn, Stephan; Markowitz, Pete; Martinez, German; Rodriguez, Jorge Luis; Adams, Todd; Askew, Andrew; Bochenek, Joseph; Chen, Jie; Diamond, Brendan; Haas, Jeff; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Prosper, Harrison; Veeraraghavan, Venkatesh; Weinberg, Marc; Baarmand, Marc M; Dorney, Brian; Hohlmann, Marcus; Kalakhety, Himali; Yumiceva, Francisco; Adams, Mark Raymond; Apanasevich, Leonard; Bazterra, Victor Eduardo; Betts, Russell Richard; Bucinskaite, Inga; Cavanaugh, Richard; Evdokimov, Olga; Gauthier, Lucie; Gerber, Cecilia Elena; Hofman, David Jonathan; Khalatyan, Samvel; Kurt, Pelin; Moon, Dong Ho; O'Brien, Christine; Silkworth, Christopher; Turner, Paul; Varelas, Nikos; Akgun, Ugur; Albayrak, Elif Asli; Bilki, Burak; Clarida, Warren; Dilsiz, Kamuran; Duru, Firdevs; Haytmyradov, Maksat; Merlo, Jean-Pierre; Mermerkaya, Hamit; Mestvirishvili, Alexi; Moeller, Anthony; Nachtman, Jane; Ogul, Hasan; Onel, Yasar; Ozok, Ferhat; Sen, Sercan; Tan, Ping; Tiras, Emrah; Wetzel, James; Yetkin, Taylan; Yi, Kai; Barnett, Bruce Arnold; Blumenfeld, Barry; Bolognesi, Sara; Fehling, David; Gritsan, Andrei; Maksimovic, Petar; Martin, Christopher; Swartz, Morris; Baringer, Philip; Bean, Alice; Benelli, Gabriele; Kenny III, Raymond Patrick; Murray, Michael; Noonan, Daniel; Sanders, Stephen; Sekaric, Jadranka; Stringer, Robert; Wang, Quan; Wood, Jeffrey Scott; Barfuss, Anne-Fleur; Chakaberia, Irakli; Ivanov, Andrew; Khalil, Sadia; Makouski, Mikhail; Maravin, Yurii; Saini, Lovedeep Kaur; Shrestha, Shruti; Svintradze, Irakli; Gronberg, Jeffrey; Lange, David; Rebassoo, Finn; Wright, Douglas; Baden, Drew; Calvert, Brian; Eno, Sarah Catherine; Gomez, Jaime; Hadley, Nicholas John; Kellogg, Richard G; Kolberg, Ted; Lu, Ying; Marionneau, Matthieu; Mignerey, Alice; Pedro, Kevin; Skuja, Andris; Temple, Jeffrey; Tonjes, Marguerite; Tonwar, Suresh C; Apyan, Aram; Barbieri, Richard; Bauer, Gerry; Busza, Wit; Cali, Ivan Amos; Chan, Matthew; Di Matteo, Leonardo; Dutta, Valentina; Gomez Ceballos, Guillelmo; Goncharov, Maxim; Gulhan, Doga; Klute, Markus; Lai, Yue Shi; Lee, Yen-Jie; Levin, Andrew; Luckey, Paul David; Ma, Teng; Paus, Christoph; Ralph, Duncan; Roland, Christof; Roland, Gunther; Stephans, George; Stöckli, Fabian; Sumorok, Konstanty; Velicanu, Dragos; Veverka, Jan; Wyslouch, Bolek; Yang, Mingming; Yoon, Sungho; Zanetti, Marco; Zhukova, Victoria; Dahmes, Bryan; De Benedetti, Abraham; Gude, Alexander; Kao, Shih-Chuan; Klapoetke, Kevin; Kubota, Yuichi; Mans, Jeremy; Pastika, Nathaniel; Rusack, Roger; Singovsky, Alexander; Tambe, Norbert; Turkewitz, Jared; Acosta, John Gabriel; Cremaldi, Lucien Marcus; Kroeger, Rob; Oliveros, Sandra; Perera, Lalith; Rahmat, Rahmat; Sanders, David A; Summers, Don; Avdeeva, Ekaterina; Bloom, Kenneth; Bose, Suvadeep; Claes, Daniel R; Dominguez, Aaron; Gonzalez Suarez, Rebeca; Keller, Jason; Knowlton, Dan; Kravchenko, Ilya; Lazo-Flores, Jose; Malik, Sudhir; Meier, Frank; Snow, Gregory R; Dolen, James; Godshalk, Andrew; Iashvili, Ia; Jain, Supriya; Kharchilava, Avto; Kumar, Ashish; Rappoccio, Salvatore; Alverson, George; Barberis, Emanuela; Baumgartel, Darin; Chasco, Matthew; Haley, Joseph; Massironi, Andrea; Nash, David; Orimoto, Toyoko; Trocino, Daniele; Wood, Darien; Zhang, Jinzhong; Anastassov, Anton; Hahn, Kristan Allan; Kubik, Andrew; Lusito, Letizia; Mucia, Nicholas; Odell, Nathaniel; Pollack, Brian; Pozdnyakov, Andrey; Schmitt, Michael Henry; Stoynev, Stoyan; Sung, Kevin; Velasco, Mayda; Won, Steven; 
Berry, Douglas; Brinkerhoff, Andrew; Chan, Kwok Ming; Drozdetskiy, Alexey; Hildreth, Michael; Jessop, Colin; Karmgard, Daniel John; Kellams, Nathan; Kolb, Jeff; Lannon, Kevin; Luo, Wuming; Lynch, Sean; Marinelli, Nancy; Morse, David Michael; Pearson, Tessa; Planer, Michael; Ruchti, Randy; Slaunwhite, Jason; Valls, Nil; Wayne, Mitchell; Wolf, Matthias; Woodard, Anna; Antonelli, Louis; Bylsma, Ben; Durkin, Lloyd Stanley; Flowers, Sean; Hill, Christopher; Hughes, Richard; Kotov, Khristian; Ling, Ta-Yung; Puigh, Darren; Rodenburg, Marissa; Smith, Geoffrey; Vuosalo, Carl; Winer, Brian L; Wolfe, Homer; Wulsin, Howard Wells; Berry, Edmund; Elmer, Peter; Halyo, Valerie; Hebda, Philip; Hegeman, Jeroen; Hunt, Adam; Jindal, Pratima; Koay, Sue Ann; Lujan, Paul; Marlow, Daniel; Medvedeva, Tatiana; Mooney, Michael; Olsen, James; Piroué, Pierre; Quan, Xiaohang; Raval, Amita; Saka, Halil; Stickland, David; Tully, Christopher; Werner, Jeremy Scott; Zenz, Seth Conrad; Zuranski, Andrzej; Brownson, Eric; Lopez, Angel; Mendez, Hector; Ramirez Vargas, Juan Eduardo; Alagoz, Enver; Benedetti, Daniele; Bolla, Gino; Bortoletto, Daniela; De Mattia, Marco; Everett, Adam; Hu, Zhen; Jha, Manoj; Jones, Matthew; Jung, Kurt; Kress, Matthew; Leonardo, Nuno; Lopes Pegna, David; Maroussov, Vassili; Merkel, Petra; Miller, David Harry; Neumeister, Norbert; Radburn-Smith, Benjamin Charles; Shipsey, Ian; Silvers, David; Svyatkovskiy, Alexey; Wang, Fuqiang; Xie, Wei; Xu, Lingshan; Yoo, Hwi Dong; Zablocki, Jakub; Zheng, Yu; Parashar, Neeti; Adair, Antony; Akgun, Bora; Ecklund, Karl Matthew; Geurts, Frank JM; Li, Wei; Michlin, Benjamin; Padley, Brian Paul; Redjimi, Radia; Roberts, Jay; Zabel, James; Betchart, Burton; Bodek, Arie; Covarelli, Roberto; de Barbaro, Pawel; Demina, Regina; Eshaq, Yossof; Ferbel, Thomas; Garcia-Bellido, Aran; Goldenzweig, Pablo; Han, Jiyeon; Harel, Amnon; Miner, Daniel Carl; Petrillo, Gianluca; Vishnevskiy, Dmitry; Zielinski, Marek; Bhatti, Anwar; Ciesielski, Robert; Demortier, Luc; Goulianos, Konstantin; Lungu, Gheorghe; Malik, Sarah; Mesropian, Christina; Arora, Sanjay; Barker, Anthony; Chou, John Paul; Contreras-Campana, Christian; Contreras-Campana, Emmanuel; Duggan, Daniel; Ferencek, Dinko; Gershtein, Yuri; Gray, Richard; Halkiadakis, Eva; Hidas, Dean; Lath, Amitabh; Panwalkar, Shruti; Park, Michael; Patel, Rishi; Rekovic, Vladimir; Robles, Jorge; Salur, Sevil; Schnetzer, Steve; Seitz, Claudia; Somalwar, Sunil; Stone, Robert; Thomas, Scott; Thomassen, Peter; Walker, Matthew; Rose, Keith; Spanier, Stefan; Yang, Zong-Chang; York, Andrew; Bouhali, Othmane; Eusebi, Ricardo; Flanagan, Will; Gilmore, Jason; Kamon, Teruki; Khotilovich, Vadim; Krutelyov, Vyacheslav; Montalvo, Roy; Osipenkov, Ilya; Pakhotin, Yuriy; Perloff, Alexx; Roe, Jeffrey; Safonov, Alexei; Sakuma, Tai; Suarez, Indara; Tatarinov, Aysen; Toback, David; Akchurin, Nural; Cowden, Christopher; Damgov, Jordan; Dragoiu, Cosmin; Dudero, Phillip Russell; Faulkner, James; Kovitanggoon, Kittikul; Kunori, Shuichi; Lee, Sung Won; Libeiro, Terence; Volobouev, Igor; Appelt, Eric; Delannoy, Andrés G; Greene, Senta; Gurrola, Alfredo; Johns, Willard; Maguire, Charles; Mao, Yaxian; Melo, Andrew; Sharma, Monika; Sheldon, Paul; Snook, Benjamin; Tuo, Shengquan; Velkovska, Julia; Arenton, Michael Wayne; Boutle, Sarah; Cox, Bradley; Francis, Brian; Goodell, Joseph; Hirosky, Robert; Ledovskoy, Alexander; Lin, Chuanzhe; Neu, Christopher; Wood, John; Gollapinni, Sowjanya; Harr, Robert; Karchin, Paul Edmund; Kottachchi Kankanamge Don, Chamath; Lamichhane, Pramod; 
Belknap, Donald; Borrello, Laura; Carlsmith, Duncan; Cepeda, Maria; Dasu, Sridhara; Duric, Senka; Friis, Evan; Grothe, Monika; Hall-Wilton, Richard; Herndon, Matthew; Hervé, Alain; Klabbers, Pamela; Klukas, Jeffrey; Lanaro, Armando; Levine, Aaron; Loveless, Richard; Mohapatra, Ajit; Ojalvo, Isabel; Perry, Thomas; Pierro, Giuseppe Antonio; Polese, Giovanni; Ross, Ian; Sakharov, Alexandre; Sarangi, Tapas; Savin, Alexander; Smith, Wesley H; Antchev, G.; Aspell, P.; Atanassov, I.; Avati, V.; Baechler, J.; Berardi, V.; Berretti, M.; Bossini, E.; Bottigli, U.; Bozzo, M.; Brucken, E.; Buzzo, A.; Cafagna, F.S.; Catanesi, M.G.; Covault, C.; Csanad, M.; Csorgo, T.; Deile, M.; Doubek, M.; Eggert, K.; Eremin, V.; Fiergolski, A.; Garcia, F.; Georgiev, V.; Giani, S.; Grzanka, L.; Hammerbauer, J.; Heino, J.; Hilden, T.; Karev, A.; Kaspar, J.; Kopal, J.; Kosinski, J.; Kundrat, V.; Lami, S.; Latino, G.; Lauhakangas, R.; Leszko, T.; Lippmaa, E.; Lippmaa, J.; Lokajicek, M.V.; Losurdo, L.; Lucas Rodriguez, F.; Macri, M.; Maki, T.; Mercadante, A.; Minafra, N.; Minutoli, S.; Nemes, F.; Niewiadomski, H.; Oliveri, E.; Oljemark, F.; Orava, R.; Oriunnof, M.; Osterberg, K.; Palazzi, P.; Peroutka, Z.; Prochazka, J.; Quinto, M.; Radermacher, E.; Radicioni, E.; Ravotti, F.; Ropelewski, L.; Ruggiero, G.; Saarikko, H.; Scribano, A.; Smajek, J.; Snoeys, W.; Sziklai, J.; Taylor, C.; Turini, N.; Vacek, V.; Welti, J.; Whitmoreh, J.; Wyszkowski, P.; Zielinski, K.

    2014-10-29

    Pseudorapidity ($\\eta$) distributions of charged particles produced in proton-proton collisions at a centre-of-mass energy of 8 TeV are measured in the ranges abs($\\eta$) < 2.2 and 5.3 < abs($\\eta$) < 6.4 covered by the CMS and TOTEM detectors, respectively. The data correspond to an integrated luminosity of 45 inverse microbarns. Measurements are presented for three event categories. The most inclusive category is sensitive to 91-96% of the total inelastic proton-proton cross section. The other two categories are disjoint subsets of the inclusive sample that are either enhanced or depleted in single diffractive dissociation events. The data are compared to models used to describe high-energy hadronic interactions. None of the models considered provide a consistent description of the measured distributions.
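
    For reference (not part of the record itself), pseudorapidity is the standard collider variable defined from the polar angle $\theta$ of a particle with respect to the beam axis:

    \[
      \eta = -\ln\!\left[\tan\!\left(\frac{\theta}{2}\right)\right],
    \]

    so the quoted ranges correspond to the central and very forward angular regions covered by CMS and TOTEM, respectively.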

  19. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  20. The CMS DBS query language

    International Nuclear Information System (INIS)

    Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo Yuyi; Lueking, Lee

    2010-01-01

    The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use, without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We describe the design of the query system, provide details of the language components and give an overview of how this component fits into the overall data discovery system architecture.
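
    To illustrate the join-discovery idea described in this record (a minimal sketch, not the actual DBS Query Language implementation; the table names, columns and query are hypothetical), the schema can be held as a graph whose edges carry join conditions, and a breadth-first search between the tables referenced in a query yields the JOIN clauses of the generated SQL:

```python
from collections import deque

# Hypothetical schema graph: table -> {neighbour_table: join_condition}
SCHEMA = {
    "dataset": {"block": "dataset.id = block.dataset_id"},
    "block":   {"dataset": "dataset.id = block.dataset_id",
                "file":    "block.id = file.block_id"},
    "file":    {"block":   "block.id = file.block_id",
                "site":    "file.site_id = site.id"},
    "site":    {"file":    "file.site_id = site.id"},
}

def join_path(start, goal):
    """Breadth-first search for the chain of tables linking start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in SCHEMA[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    raise ValueError(f"no join path from {start} to {goal}")

def build_sql(select_table, select_column, where_table, where_clause):
    """Assemble a SQL string from the discovered join path."""
    path = join_path(select_table, where_table)
    sql = [f"SELECT {select_table}.{select_column} FROM {select_table}"]
    for a, b in zip(path, path[1:]):
        sql.append(f"JOIN {b} ON {SCHEMA[a][b]}")
    sql.append(f"WHERE {where_clause}")
    return " ".join(sql)

# e.g. "find dataset where site = T2_CH_CERN" (hypothetical query)
print(build_sql("dataset", "name", "site", "site.name = 'T2_CH_CERN'"))
```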

  1. 42 CFR 460.18 - CMS evaluation of applications.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false CMS evaluation of applications. 460.18 Section 460... ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.18 CMS evaluation of applications. CMS evaluates an application for approval as a PACE organization on the basis of the following...

  2. Forward energy measurement with CMS

    CERN Document Server

    Kheyn, Lev

    2016-01-01

    Energy flow is measured in the forward region of CMS at pseudorapidities up to 6.6 in pp interactions at 13 TeV with the forward (HF) and very forward (CASTOR) calorimeters. The results are compared to model predictions. The CMS results at different center-of-mass energies are intercompared using the pseudorapidity variable shifted by the beam rapidity, thus studying the applicability of the hypothesis of limiting fragmentation.

  3. 42 CFR 405.1834 - CMS reviewing official procedure.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS reviewing official procedure. 405.1834 Section... Determinations and Appeals § 405.1834 CMS reviewing official procedure. (a) Scope. A provider that is a party to... Administrator by a designated CMS reviewing official who considers whether the decision of the intermediary...

  4. 42 CFR 422.2264 - Guidelines for CMS review.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Guidelines for CMS review. 422.2264 Section 422... Guidelines for CMS review. In reviewing marketing material or election forms under § 422.2262 of this part, CMS determines that the marketing materials— (a) Provide, in a format (and, where appropriate, print...

  5. Reconstruction of electrons with the Gaussian-sum filter in the CMS tracker at the LHC

    International Nuclear Information System (INIS)

    Adam, W; Fruehwirth, R; Strandlie, A; Todorov, T

    2005-01-01

    The bremsstrahlung energy loss distribution of electrons propagating in matter is highly non-Gaussian. Because the Kalman filter relies solely on Gaussian probability density functions, it is not necessarily the optimal reconstruction algorithm for electron tracks. A Gaussian-sum filter (GSF) algorithm for electron reconstruction in the CMS tracker has therefore been developed and implemented. The basic idea is to model the bremsstrahlung energy loss distribution by a Gaussian mixture rather than by a single Gaussian. It is shown that the GSF is able to improve the momentum resolution of electrons compared to the standard Kalman filter. The momentum resolution and the quality of the error estimate are studied both with a fast simulation, modelling the radiative energy loss in a simplified detector, and the full CMS tracker simulation. (research note from collaboration)
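
    The underlying idea can be stated compactly (standard Gaussian-sum-filter formalism, not quoted from the note): the Bethe-Heitler energy-loss distribution is approximated by a Gaussian mixture,

    \[
      p(\Delta E) \;\approx\; \sum_{i=1}^{N} w_i\, \mathcal{N}\!\left(\Delta E;\, \mu_i, \sigma_i^2\right), \qquad \sum_{i=1}^{N} w_i = 1,
    \]

    so that the filtered track state after each material layer is itself a weighted sum of Gaussian components; in practice the number of components is kept bounded by merging or discarding low-weight terms.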

  6. Reconstruction of Electrons with the Gaussian-Sum Filter in the CMS Tracker at the LHC

    CERN Document Server

    Adam, Wolfgang; Strandlie, Are; Todorov, T

    2005-01-01

    The bremsstrahlung energy loss distribution of electrons propagating in matter is highly non-Gaussian. Because the Kalman filter relies solely on Gaussian probability density functions, it is not necessarily the optimal reconstruction algorithm for electron tracks. A Gaussian-sum filter (GSF) algorithm for electron reconstruction in the CMS tracker has therefore been developed and implemented. The basic idea is to model the bremsstrahlung energy loss distribution by a Gaussian mixture rather than by a single Gaussian. It is shown that the GSF is able to improve the momentum resolution of electrons compared to the standard Kalman filter. The momentum resolution and the quality of the error estimate are studied both with a fast simulation, modelling the radiative energy loss in a simplified detector, and the full CMS tracker simulation.

  7. CMS and ATLAS honour their suppliers

    CERN Multimedia

    2001-01-01

    In order to motivate the hundreds of companies building their detectors, the CMS and ATLAS collaborations have recently been handing out awards of excellence to their top suppliers. At its second ceremony of this kind, CMS honoured four of its suppliers, while ATLAS for the first time paid tribute to two of its contractors. The atmosphere in the Council Chamber was festive rather than formal at the start of CMS week on Monday 5 March. Before embarking upon a long series of seminars and presentations, the Collaboration held its second awards ceremony to honour its top suppliers. By paying tribute to the exceptional efforts of certain suppliers, the Collaboration's aim is to motivate all the firms, some 500 in total, taking part in the experiment's construction. The CMS Awards panel thus singles out contractors who have not only provided full satisfaction in terms of compliance with specifications, quality and deadlines, but have in addition provided original solutions to delicate problems. Four firms came away...

  8. CMS rewards eight of its suppliers

    CERN Multimedia

    2002-01-01

    At the third awards ceremony to honour its top suppliers, the CMS collaboration presented awards to eight firms. Seven of them are involved in the manufacture of the magnet. The winners of the third CMS suppliers' awards visit the assembly site for the detector. Unsurprisingly, the CMS magnet was once again in the limelight at the third awards ceremony in honour of the collaboration's top suppliers. 'Unsurprisingly', because this magnet, which must produce an intense field of 4 Tesla inside an enormous volume (12 metres in diameter and 13 metres in length) is the detector's key component. As a result, many firms are involved in its construction. The CMS suppliers' awards are an annual event aimed at rewarding the exceptional efforts of certain companies. Firms are only eligible once they have delivered at least 50% of their supplies. This year, the collaboration honoured eight firms at a ceremony held on Monday 4 March in the main auditorium. Seven of th...

  9. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run 1 of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...

  10. Optical readout and control systems for the CMS tracker

    CERN Document Server

    Troska, Jan K; Faccio, F; Gill, K; Grabit, R; Jareno, R M; Sandvik, A M; Vasey, F

    2003-01-01

    The Compact Muon Solenoid (CMS) Experiment will be installed at the CERN Large Hadron Collider (LHC) in 2007. The readout system for the CMS Tracker consists of 10000000 individual detector channels that are time-multiplexed onto 40000 unidirectional analogue (40 MSample /s) optical links for transmission between the detector and the 65 m distant counting room. The corresponding control system consists of 2500 bi-directional digital (40 Mb/s) optical links based as far as possible upon the same components. The on-detector elements (lasers and photodiodes) of both readout and control links will be distributed throughout the detector volume in close proximity to the silicon detector elements. For this reason, strict requirements are placed on minimal package size, mass, power dissipation, immunity to magnetic field, and radiation hardness. It has been possible to meet the requirements with the extensive use of commercially available components with a minimum of customization. The project has now entered its vol...

  11. The CMS Outer Hadron Calorimeter

    CERN Document Server

    Acharya, Bannaje Sripathi; Banerjee, Sunanda; Banerjee, Sudeshna; Bawa, Harinder Singh; Beri, Suman Bala; Bhandari, Virender; Bhatnagar, Vipin; Chendvankar, Sanjay; Deshpande, Pandurang Vishnu; Dugad, Shashikant; Ganguli, Som N; Guchait, Monoranjan; Gurtu, Atul; Kalmani, Suresh Devendrappa; Kaur, Manjit; Kohli, Jatinder Mohan; Krishnaswamy, Marthi Ramaswamy; Kumar, Arun; Maity, Manas; Majumder, Gobinda; Mazumdar, Kajari; Mondal, Naba Kumar; Nagaraj, P; Narasimham, Vemuri Syamala; Patil, Mandakini Ravindra; Reddy, L V; Satyanarayana, B; Sharma, Seema; Singh, B; Singh, Jas Bir; Sudhakar, Katta; Tonwar, Suresh C; Verma, Piyush

    2006-01-01

    The CMS hadron calorimeter is a sampling calorimeter with brass absorber and plastic scintillator tiles with wavelength shifting fibres for carrying the light to the readout device. The barrel hadron calorimeter is complemented with an outer calorimeter to ensure high-energy shower containment in CMS, thus acting as a tail catcher. Fabrication, testing and calibration of the outer hadron calorimeter are carried out keeping in mind its importance for the energy measurement of jets in terms of linearity and resolution. It will provide a net improvement in missing transverse energy measurements at LHC energies. The outer hadron calorimeter has a very good signal-to-background ratio even for a minimum ionising particle and can hence be used in coincidence with the Resistive Plate Chambers of the CMS detector for the muon trigger.

  12. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  13. The CMS "Higgs Boson Goose Game" Poster

    CERN Multimedia

    Davis, Siona Ruth

    Building and operating the CMS Detector is a complicated endeavour! Now, more than 20 years after the detector was conceived, the CMS Bologna group proposes to follow the steps of this challenging project by playing "The Higgs Boson Goose Game", illustrating CMS activities and goals. The concept of the game is inspired by the traditional "Game of the Goose". The underlying idea is that the progress of building and operating a detector at the LHC is similar to the progress of the pawns on the game board: it is fast at times, bringing rewards and satisfaction, while sometimes unexpected problems cause delays or even a step back requiring CMS scientists to use all of their skill and creativity to devise new solutions.

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for the PanDA pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
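
    Purely as an illustration of how a client might consume such an information system (the endpoint URL, JSON fields and caching policy below are hypothetical and do not describe the actual AGIS API):

```python
import json
import time
import urllib.request

# Hypothetical endpoint returning site/queue topology as JSON.
INFO_URL = "https://example.org/info/sites?json"
CACHE_TTL = 600  # seconds; refresh the topology every 10 minutes

_cache = {"stamp": 0.0, "sites": {}}

def get_sites():
    """Return the cached topology, refreshing it from the info system when stale."""
    if time.time() - _cache["stamp"] > CACHE_TTL:
        with urllib.request.urlopen(INFO_URL) as resp:
            data = json.load(resp)
        # Index by site name for quick lookups by other components.
        _cache["sites"] = {s["name"]: s for s in data}
        _cache["stamp"] = time.time()
    return _cache["sites"]

def storage_endpoints(site_name):
    """List the storage protocols declared for a site (hypothetical schema)."""
    site = get_sites().get(site_name, {})
    return [p["endpoint"] for p in site.get("protocols", [])]
```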

  15. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  16. Performance of the CMS Event Builder

    Energy Technology Data Exchange (ETDEWEB)

    Andre, J.M.; et al.

    2017-11-22

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz and transports the event data to the high-level trigger farm at high aggregate throughput. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s InfiniBand FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
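
    Conceptually, event building reduces to collecting one fragment per readout source for each event identifier and passing the completed event on; the toy sketch below shows only that bookkeeping and deliberately ignores the Ethernet/InfiniBand transport and the FPGA TCP engine described in the record:

```python
from collections import defaultdict

class EventBuilder:
    """Toy fragment assembler: an event is complete once every source has reported."""

    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # event_id -> {source_id: payload}

    def add_fragment(self, event_id, source_id, payload):
        frags = self.pending[event_id]
        frags[source_id] = payload
        if len(frags) == self.n_sources:            # all fragments arrived
            del self.pending[event_id]
            return b"".join(frags[s] for s in sorted(frags))  # built event
        return None                                  # still incomplete

builder = EventBuilder(n_sources=3)
for src in range(3):
    event = builder.add_fragment(event_id=42, source_id=src, payload=bytes([src]))
print(event)  # b'\x00\x01\x02' once the last fragment of event 42 arrives
```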

  17. Readiness of CMS simulation towards LHC startup

    International Nuclear Information System (INIS)

    Banerjee, S

    2008-01-01

    The CMS experiment has used detector simulation software in its conceptual as well as its technical design. With detector construction near completion, the role of simulation has shifted toward understanding the collision data to be collected by CMS in the near future. The CMS simulation software is becoming a data-driven, realistic and accurate Monte Carlo programme. The software architecture is described with some detail of the framework as well as of the detector-specific components. Performance issues are discussed as well.

  18. 42 CFR 482.74 - Condition of participation: Notification to CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Condition of participation: Notification to CMS... participation: Notification to CMS. (a) A transplant center must notify CMS immediately of any significant... conditions of participation. Instances in which CMS should receive information for follow up, as appropriate...

  19. The CMS Electromagnetic Calorimeter: Construction, Commissioning and Calibration

    CERN Document Server

    ORIMOTO,Toyoko J.

    2009-01-01

    The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) is ready for first collisions. The Electromagnetic Calorimeter (ECAL) of CMS, a high-resolution detector comprised of nearly 76000 lead tungstate crystals, will play a crucial role in the coming physics searches undertaken by CMS. The design and performance of the CMS ECAL with test beams, cosmic rays, and first single-beam data will be presented. In addition, the status of the calorimeter and plans for calibration with first collisions will be discussed. Presented at the European Physical Society Europhysics Conference on High Energy Physics, July 16-22, 2009, Krakow, Poland.

  20. A gLite FTS based solution for managing user output in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN]; Riahi, H. [INFN, Perugia]; Spiga, D. [CERN]; Grandi, C. [INFN, Bologna]; Mancinelli, V. [CERN]; Mascheroni, M. [CERN]; Pepe, F. [INFN, Bologna]; Vaandering, E. [Fermilab]

    2012-01-01

    The CMS distributed data analysis workflow assumes that jobs run in a different location from where their results are finally stored. Typically the user output must be transferred across the network from one site to another, possibly on a different continent or over links not necessarily validated for high-bandwidth, high-reliability transfer. This step, named stage-out, was originally implemented in CMS as a synchronous step of the analysis job execution. However, our experience showed the weakness of this approach, both in terms of low overall job execution efficiency and of failure rates, wasting precious CPU resources. The nature of analysis data makes it inappropriate to use PhEDEx, the core data placement system for CMS. As part of the new generation of CMS Workload Management tools, the Asynchronous Stage-Out system (AsyncStageOut) has been developed to enable third-party copy of the user output. The AsyncStageOut component manages gLite FTS transfers of data from the temporary store at the site where the job ran to the final location of the data, on behalf of the data owner. The tool uses Python daemons, built using the WMCore framework, and CouchDB, to manage the queue of work and FTS transfers. CouchDB also provides the platform for a dedicated operations monitoring system. In this paper, we present the motivation for the asynchronous stage-out system. We give an insight into the design and the implementation of key features, describing how it is coupled with the CMS workload management system. Finally, we show the results and the commissioning experience.
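
    A highly simplified sketch of the asynchronous stage-out pattern described here: pending user outputs are queued as documents, a daemon groups them by source and destination, hands each group to a bulk transfer service and records the outcome. The queue layout and the submit/poll helpers are hypothetical stand-ins, not the real WMCore, CouchDB or FTS interfaces:

```python
from collections import defaultdict

# Hypothetical work queue: one record per user output file awaiting stage-out.
QUEUE = [
    {"lfn": "/store/user/a/file1.root", "source": "T2_IT_Pisa", "dest": "T2_US_MIT", "state": "new"},
    {"lfn": "/store/user/a/file2.root", "source": "T2_IT_Pisa", "dest": "T2_US_MIT", "state": "new"},
]

def submit_transfer(source, dest, lfns):
    """Stand-in for a bulk submission to a transfer service (e.g. FTS)."""
    print(f"submitting {len(lfns)} files {source} -> {dest}")
    return f"job-{source}-{dest}"          # pretend transfer-job identifier

def poll_transfer(job_id):
    """Stand-in for polling the transfer service; always 'done' in this sketch."""
    return "done"

def stage_out_cycle():
    # 1. Group new documents by (source, destination) to build bulk requests.
    groups = defaultdict(list)
    for doc in QUEUE:
        if doc["state"] == "new":
            groups[(doc["source"], doc["dest"])].append(doc)
    # 2. Submit one transfer job per group.
    jobs = {}
    for (src, dst), docs in groups.items():
        jobs[submit_transfer(src, dst, [d["lfn"] for d in docs])] = docs
    # 3. Poll and record the final state of each document.
    for job_id, docs in jobs.items():
        state = poll_transfer(job_id)
        for d in docs:
            d["state"] = state

stage_out_cycle()
print(QUEUE)
```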

  1. CMS reconstruction improvements for the tracking in large pileup events

    CERN Document Server

    Rovere, M

    2015-01-01

    The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, followed by a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to keep the reconstruction computing load compatible with the increasing instantaneous luminosity of the LHC, which results in a large number of primary vertices and tracks per bunch crossing. The major upgrade put in place during the present LHC Long Shutdown will allow the tracking code to cope with the conditions expected during Run II and the much larger pileup. In particular, new algorithms that are intrinsically more robust in high-occupancy conditions were developed, iteration...
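
    The iterative structure can be summarised in a few lines of illustrative Python (the step objects, their methods and the track attributes are placeholders, not CMSSW interfaces): each step runs seeding, pattern recognition, a Kalman fit and a quality filter, and the hits of accepted tracks are masked before the next iteration:

```python
def iterative_tracking(hits, steps):
    """Run tracking iterations, masking hits already used by accepted tracks."""
    available = set(hits)
    all_tracks = []
    for step in steps:
        seeds = step.seed(available)                      # e.g. prompt or displaced seeding
        candidates = step.pattern_recognition(seeds, available)
        tracks = [step.kalman_fit(c) for c in candidates]
        tracks = [t for t in tracks if step.quality_filter(t)]
        for t in tracks:
            available -= set(t.hits)                      # hits are not reused downstream
        all_tracks.extend(tracks)
    return all_tracks
```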

  2. Highlights from CMS

    CERN Document Server

    Autermann, Christian

    2018-01-01

    This article summarizes the latest highlights from the CMS experiment as presented at the Lepton Photon conference 2017 in Guangzhou, China. A selection of the latest physics results, the latest detector upgrades, and the current detector status are discussed. CMS has analyzed the full dataset of proton-proton collision data delivered by the LHC in 2016 at a center-of-mass energy of $13$\\,TeV corresponding to an integrated luminosity of $40$\\,fb$^{-1}$. The leap in center-of-mass energy and in luminosity with respect to the $7$ and $8$\\,TeV runs enabled interesting and relevant new physics results. A new silicon pixel tracking detector was installed during the LHC shutdown 2016/17 and has successfully started operation.

  3. 23 CFR 971.214 - Federal lands congestion management system (CMS).

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Federal lands congestion management system (CMS). 971... Federal lands congestion management system (CMS). (a) For purposes of this section, congestion means the...) Develop criteria to determine when a CMS is to be implemented for a specific FH; and (2) Have CMS coverage...

  4. The CMS detector before closure

    CERN Multimedia

    Patrice Loiez

    2006-01-01

    The CMS detector before testing with cosmic-ray muons, which are produced when high-energy particles from space crash into the Earth's atmosphere, generating a cascade of energetic particles. After closing CMS, the magnets, calorimeters, trackers and muon chambers were tested on a small section of the detector as part of the magnet test and cosmic challenge. This test checked the alignment and functionality of the detector systems, as well as the magnets.

  5. Recent SUSY Results from CMS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present a summary of the recent results of searches for supersymmetry conducted by the CMS experiment. Several searches are reported using complementary final states and methods. The results presented include searches for stops and sbottoms, production of charginos and neutralinos, and R-parity violating signatures. Several of them are the first results of their kind from CMS, while others increased the mass reach significantly over previously published results from the LHC.

  6. Inauguration of the CMS solenoid

    CERN Multimedia

    Maximilien Brice

    2005-01-01

    In early 2005 the final piece of the CMS solenoid magnet arrived, marked by this ceremony held in the CMS assembly hall at Cessy, France. The solenoid is made up of five pieces totaling 12.5 m in length and 6 m in diameter. Weighing 220 tonnes, it will produce a 4 T magnetic field, 100 000 times the strength of the Earth's magnetic field and store enough energy to melt 18 tonnes of gold.

  7. Operational experience with the GEM detector assembly lines for the CMS forward muon upgrade

    CERN Document Server

    Vai, Ilaria

    2017-01-01

    The CMS Collaboration has been developing large-area Triple-GEM detectors to be installed in the muon endcap regions of the CMS experiment in 2019 to maintain forward muon trigger and tracking performance at the HL-LHC. Ten pre-production detectors were built at CERN to commission the first assembly line and the quality controls. These were installed in the CMS detector in early 2017 and are currently participating in the 2017 LHC run. The collaboration has prepared several additional assembly and quality control lines for distributed mass production of 160 GEM detectors at various sites worldwide. During 2017, these additional production sites have been optimizing construction techniques and quality control procedures and validating them against common specifications by constructing additional pre-production detectors. Using the specific experience from one production site as an example, we discuss how the quality controls make use of independent hardware and trained personnel to ensure fast and reliable pro...

  8. PREP: Production and Reprocessing management tool for CMS

    International Nuclear Information System (INIS)

    Cossutti, F; Lenzi, P; Naziridis, N; Samyn, D; Stöckli, F

    2012-01-01

    The production of simulated samples for physics analysis at the LHC represents a considerable organizational challenge, because it requires the management of several thousand different workflows. The submission of a workflow to the grid-based computing infrastructure starts with the definition of the general characteristics of a coherent set of samples (called a 'campaign'), up to the definition of the physics settings to be used for each sample corresponding to a specific process to be simulated, both at the hard-event generation and at the detector simulation level. In order to have organized control of the definition of the large number of MC samples needed by CMS, a dedicated management tool, called PREP, has been built. Its basic component is a database storing all the relevant information about the samples and the actions implied by workflow definition, approval and production. A web-based interface allows the database to be used by experts involved in production to trigger all the different actions needed, as well as by physicists involved in analyses to retrieve the relevant information. The tool is integrated through a set of dedicated APIs with the production agent and information storage utilities of CMS.
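
    To make the campaign/request bookkeeping concrete, the sketch below models the kind of records and life cycle the record describes; the field names, state names and example values are purely illustrative and are not the actual PREP schema:

```python
from dataclasses import dataclass, field

# Illustrative life cycle of a Monte Carlo request (not the real PREP states).
STATES = ["defined", "approved", "submitted", "done"]

@dataclass
class Campaign:
    name: str                      # e.g. "Summer12" (hypothetical)
    cmssw_release: str
    conditions: str                # tag describing detector conditions

@dataclass
class Request:
    prepid: str
    campaign: Campaign
    generator_fragment: str        # hard-event generation settings
    events: int
    state: str = "defined"
    history: list = field(default_factory=list)

    def advance(self):
        """Move the request to the next state in its life cycle."""
        idx = STATES.index(self.state)
        if idx + 1 < len(STATES):
            self.history.append(self.state)
            self.state = STATES[idx + 1]
        return self.state

camp = Campaign("Summer12", "CMSSW_5_3_X", "START53_V7A")
req = Request("HIG-Summer12-00001", camp, "ttH_fragment.py", events=500000)
req.advance()                      # defined -> approved
print(req.state, req.history)
```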

  9. A large-scale application of the Kalman alignment algorithm to the CMS tracker

    International Nuclear Information System (INIS)

    Widl, E; Fruehwirth, R

    2008-01-01

    The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms, by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature also comes with various trade-offs: mechanisms are needed that determine which alignable objects are significantly correlated and keep track of these correlations. Due to the large number of alignable objects involved at each update (at least compared to local alignment algorithms), the time spent for retrieving and writing alignment parameters as well as the required user memory becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel, and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory.
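
    For orientation, the measurement update at the heart of any Kalman-filter-based alignment (the generic formalism; the record's specific bookkeeping of correlated alignables is built on top of it) reads

    \[
      K = C H^{\mathsf T}\left(V + H C H^{\mathsf T}\right)^{-1}, \qquad
      a' = a + K r, \qquad
      C' = (I - K H)\, C,
    \]

    where $a$ and $C$ are the current alignment parameters and their covariance, $r$ is a track residual, $H$ its derivative with respect to the parameters and $V$ the measurement covariance; restricting each update to the significantly correlated alignables is what keeps the matrices small enough to avoid large inversions.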

  10. Mobile Agents in Networking and Distributed Computing

    CERN Document Server

    Cao, Jiannong

    2012-01-01

    The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

  11. Commissioning the CMS Alignment and Calibration Framework

    CERN Document Server

    Futyan, David

    2009-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience...

  12. CMS: the first barrel ring completed!

    CERN Multimedia

    2000-01-01

    Seven years after design studies began, CERN and the German company DWE have erected the first of the five CMS yoke rings, a giant component weighing 1200 tonnes. The first ring of the CMS magnet yoke, a twelve-sided 15-metre-high colossus, has been erected in the new hall at Point 5 near Cessy. For the last few days it has stood unaided, no longer relying on the central structure required for its assembly. Its construction marks an important milestone in the CMS programme, the culmination of seven years of work at CERN and over two years of manufacturing at DWE. Awarded the contract by the Swiss Federal Institute of Technology (ETH), Zürich, the German manufacturer has produced and assembled the ring components in collaboration with a team from CERN. This feat of mechanical engineering was celebrated two weeks ago at a drink attended by the main protagonists, headed by Franz Kufner, divisional manager at DWE, Franz Leher, production engineer at DWE, Alain Hervé, CMS technical coordinator,...

  13. 42 CFR 426.415 - CMS' role in the LCD review.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false CMS' role in the LCD review. 426.415 Section 426... Review of an LCD § 426.415 CMS' role in the LCD review. CMS may provide to the ALJ, and all parties to the LCD review, information identifying the person who represents the contractor or CMS, if necessary...

  14. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  15. Performance test of the CMS link alignment system

    CERN Document Server

    Arce, P; Calvo, E; Fernández, M G; Ferrando, A; Figueroa, C F; García, N; Josa-Mutuberria, I; Molinero, A; Oller, J C; Rodrigo, T; Vila, I; Virto, A L

    2002-01-01

    A first global test of the CMS Alignment System was performed at the I4 hall of the CERN ISR tunnel. Positions of the network, reproducing a set of points in the CMS detector monitored by the Link System, were reconstructed and compared to survey measurements. Spatial and angular reconstruction precisions reached in the present experimental set-up are already close to the CMS requirements.

  16. Tracking performance with cosmic rays in CMS

    International Nuclear Information System (INIS)

    Cerati, G.B.

    2009-01-01

    The CMS Tracker is the biggest all-silicon detector in the world and is designed to be extremely efficient and accurate even in a very hostile environment such as the one close to the CMS collision point. It consists of an inner pixel detector, made of three barrel layers (48M pixels) and four forward disks (16M pixels), and an outer micro-strip detector, divided into two barrel sub-detectors, TIB and TOB, and two endcap sub-detectors, TID and TEC, for a total of 9.6M strips. The commissioning of the CMS Tracker detector was initially carried out at the Tracker Integration Facility (TIF) at CERN, where cosmic-ray data were collected for the strip detector only, and is still ongoing at the CMS site (LHC Point 5). Here the Strip and Pixel detectors have been installed in the experiment and are taking part in the cosmic global runs. After an overview of the tracking algorithms for cosmic-ray data reconstruction, the resulting tracking performance on cosmic data both at the TIF and at P5 is presented. The excellent performance proves that the CMS Tracker is ready for the first collisions foreseen for 2009.

  17. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    Science.gov (United States)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS introduced a number of enhancements into the main software packages and the tools used for centrally managed processing. In this presentation we highlight the improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operations and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. This required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which provides access to more than 200k CPU cores.
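
    As an illustration of the relaxed coupling between workflows and sites described here (a toy cost model with hypothetical site names and slot counts, not the actual CMS assignment logic), a workflow may run at any site with free slots, preferring sites that host its input data and otherwise falling back to remote reads through the data federation:

```python
# Toy assignment: prefer sites hosting the input dataset, else allow remote read.
SITES = {
    "T1_DE_KIT":  {"free_slots": 2000, "hosts": {"/TT/RunIISummer/AODSIM"}},
    "T2_US_MIT":  {"free_slots": 5000, "hosts": set()},
    "T2_CH_CERN": {"free_slots": 1000, "hosts": {"/TT/RunIISummer/AODSIM"}},
}

def assign(workflow_dataset, needed_slots):
    """Pick a site with enough free slots, preferring local data (lower cost)."""
    def cost(item):
        name, info = item
        remote_penalty = 0 if workflow_dataset in info["hosts"] else 1
        return (remote_penalty, -info["free_slots"])   # local first, then most slots
    for name, info in sorted(SITES.items(), key=cost):
        if info["free_slots"] >= needed_slots:
            info["free_slots"] -= needed_slots
            mode = "local read" if workflow_dataset in info["hosts"] else "remote read via federation"
            return name, mode
    raise RuntimeError("no site with enough free slots")

print(assign("/TT/RunIISummer/AODSIM", 1500))   # ('T1_DE_KIT', 'local read')
```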

  18. Luminosity measurement and beam condition monitoring at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, Jessica Lynn [DESY, Zeuthen (Germany)

    2015-07-01

    The BRIL system of CMS consists of instrumentation to measure the luminosity online and offline, and to monitor the LHC beam conditions inside CMS. An accurate luminosity measurement is essential to the CMS physics program, and measurement of the beam background is necessary to ensure safe operation of CMS. In expectation of higher luminosity and denser proton bunch spacing during LHC Run II, many of the BRIL subsystems are being upgraded and others are being added to complement the existing measurements. The beam condition monitor (BCM) consists of several sets of diamond sensors used to measure online luminosity and beam background with a single-bunch-crossing resolution. The BCM also detects when beam conditions become unfavorable for CMS running and may trigger a beam abort to protect the detector. The beam halo monitor (BHM) uses quartz bars to measure the background of the incoming beams at larger radii. The pixel luminosity telescope (PLT) consists of telescopes of silicon sensors designed to provide a CMS online and offline luminosity measurement. In addition, the forward hadronic calorimeter (HF) will deliver an independent luminosity measurement, making the whole system robust and allowing for cross-checks of the systematics. Data from each of the subsystems will be collected and combined in the BRIL DAQ framework, which will publish it to CMS and LHC. The current status of installation and commissioning results for the BRIL subsystems are given.

  19. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and l1-SPIRiT reconstruction of nine high temporal resolution, real-time, cardiac short-axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm^3 isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing-enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.
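
    The split/merge pattern behind the multinode Gadgetron can be sketched in a few lines. The toy below uses Python's multiprocessing as a stand-in for remote Gadgetron instances and a plain inverse FFT as a stand-in for the real (nonlinear) reconstruction; all names and sizes are invented for illustration.

      # Toy stand-in for distributing reconstruction work: each "node" reconstructs
      # a subset of 2D slices and the results are merged. The real Gadgetron cloud
      # dispatches work to remote Gadgetron instances; multiprocessing is used here
      # only to illustrate the split/merge pattern.

      import numpy as np
      from multiprocessing import Pool

      def reconstruct_slice(kspace_slice):
          # Simplest possible "reconstruction": inverse 2D FFT of Cartesian k-space.
          return np.abs(np.fft.ifft2(kspace_slice))

      def distributed_reconstruction(kspace_volume, n_workers=4):
          with Pool(n_workers) as pool:
              images = pool.map(reconstruct_slice, list(kspace_volume))
          return np.stack(images)

      if __name__ == "__main__":
          volume = np.random.randn(32, 128, 128) + 1j * np.random.randn(32, 128, 128)
          images = distributed_reconstruction(volume)
          print(images.shape)   # (32, 128, 128)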

  20. Multi-core processing and scheduling performance in CMS

    International Nuclear Information System (INIS)

    Hernández, J M; Evans, D; Foulkes, S

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever-increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs to have control over a larger quantum of resources, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
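
    The memory argument for multi-core processing can be made concrete with a small sketch. In the Python toy below (assuming a Unix-like system where the fork start method is available), a block of read-only "conditions" data is loaded once in the parent and shared copy-on-write by all workers, instead of being duplicated in N independent single-core jobs. The sizes and names are placeholders, not CMSSW code.

      # Illustration of why multi-core jobs use less memory than independent
      # single-core jobs: data loaded once in the parent before forking (geometry,
      # conditions, code) is shared copy-on-write by the worker processes instead
      # of being duplicated N times. Hypothetical sizes; not CMSSW code.

      import multiprocessing as mp
      import numpy as np

      CONDITIONS = np.zeros(50_000_000, dtype=np.uint8)   # ~50 MB of read-only "conditions"

      def process_events(worker_id, n_events):
          # Workers only read CONDITIONS, so the pages stay shared with the parent.
          checksum = int(CONDITIONS[:1000].sum())
          return f"worker {worker_id}: processed {n_events} events (checksum {checksum})"

      if __name__ == "__main__":
          ctx = mp.get_context("fork")          # fork start method enables the sharing (Unix only)
          with ctx.Pool(processes=8) as pool:
              results = pool.starmap(process_events, [(i, 1000) for i in range(8)])
          for line in results:
              print(line)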

  1. Russian and Belorussian firms receive CMS Gold Awards

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    On 7 March, CMS handed out its three latest Gold Awards in recognition of outstanding supplier performance. Photos 01,02: Prof. Felicitas Pauss, Deputy Chair of the CMS Collaboration Board, presents a CMS Gold Award to Professor Valery Novikov, Director-General of Myasishchev Design Bureau, Zhukovsky, Moscow Region, Russia. The Myasishchev company was responsible for the carbon fibre structures in which the fragile lead tungstate crystals of the electromagnetic calorimeter end-caps are to be embedded. These lightweight structures must support a weight of 22.9 tonnes in each end-cap! The company produced a very thin-walled modular structure that ensured the calorimeter performance would not be harmed, while remaining stable and strong. Photos 03,04: Prof. Felicitas Pauss, Deputy Chair of the CMS Collaboration Board, presents a CMS Gold Award to Professor Boris Gabaraev, General director of N.A. Dollezhal Research and Development Institute of Power Engineering (NIKIET), Moscow, Russia (a.k.a. ENTEK) for the de...

  2. The CMS Beam Halo Monitor electronics

    International Nuclear Information System (INIS)

    Tosi, N.; Fabbri, F.; Montanari, A.; Torromeo, G.; Dabrowski, A.E.; Orfanelli, S.; Grassi, T.; Hughes, E.; Mans, J.; Rusack, R.; Stifter, K.; Stickland, D.P.

    2016-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern during LHC Long Shutdown 1 for measuring the machine-induced background in LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes (PMTs). The readout electronics chain uses many components developed for the Phase 1 upgrade to the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge-integrating ASIC (QIE10), providing both the signal rise time, with a resolution of a few nanoseconds, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch-crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event-based data. The data is processed in real time and published to CMS and the LHC, providing online feedback on the beam quality. A dedicated calibration monitoring system has been designed to generate short triggered pulses of light to monitor the efficiency of the system. The electronics has been in operation since the first LHC beams of Run II and has served as the first demonstration with LHC data of the new QIE10, the Microsemi Igloo2 FPGA and the high-speed 5 Gbps link.

  3. 42 CFR 403.248 - Administrative review of CMS determinations.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Administrative review of CMS determinations. 403... Certification Program: General Provisions § 403.248 Administrative review of CMS determinations. (a) This section provides for administrative review if CMS determines— (1) Not to certify a policy; or (2) That a...

  4. A data grid prototype for distributed data production in CMS

    CERN Document Server

    Hafeez, M; Stockinger, H E

    2001-01-01

    The CMS experiment at CERN is setting up a grid infrastructure required to fulfil the needs imposed by Terabyte-scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimise performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middleware of choice is the Globus toolkit, which provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high-speed file transfers, secure access to remote files, selection and synchronisation of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site-specific policies. The data management granularity is the file rather than the object level. The first prototype is curre...
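
    The file-level data management functions mentioned above (replica selection and synchronisation) can be sketched as follows. The catalogue contents, site names and preference policy are invented for the example; in the prototype itself the actual transfers and security were handled by the Globus toolkit.

      # Toy replica catalogue illustrating file-level data management: select a
      # replica for reading and compute which files still have to be transferred
      # to synchronise a destination site. Names and policies are invented for
      # the example.

      replica_catalogue = {
          "hits_0001.db": {"CERN", "SiteA"},
          "hits_0002.db": {"CERN"},
          "hits_0003.db": {"CERN", "SiteB"},
      }

      def select_replica(filename, preferred_order=("SiteA", "SiteB", "CERN")):
          """Pick a replica according to a simple site-preference policy."""
          sites = replica_catalogue.get(filename, set())
          for site in preferred_order:
              if site in sites:
                  return site
          raise LookupError(f"no replica of {filename}")

      def files_to_synchronise(destination):
          """Files that must be copied so 'destination' holds a full replica set."""
          return [f for f, sites in replica_catalogue.items() if destination not in sites]

      print(select_replica("hits_0003.db"))        # SiteB
      print(files_to_synchronise("SiteA"))         # ['hits_0002.db', 'hits_0003.db']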

  5. submitter Performance studies of CMS workflows using Big Data technologies

    CERN Document Server

    Ambroz, Luca; Grandi, Claudio

    At the Large Hadron Collider (LHC), more than 30 petabytes of data are produced from particle collisions every year of data taking. The data processing requires large volumes of simulated events through Monte Carlo techniques. Furthermore, physics analysis implies daily access to derived data formats by hundreds of users. The Worldwide LHC Computing Grid (WLCG) - an international collaboration involving personnel and computing centers worldwide - is successfully coping with these challenges, enabling the LHC physics program. With the continuation of LHC data taking and the approval of ambitious projects such as the High-Luminosity LHC, such challenges will reach the edge of current computing capacity and performance. One of the keys to success in the next decades - also under severe financial resource constraints - is to optimize the efficiency in exploiting the computing resources. This thesis focuses on performance studies of CMS workflows, namely centrally scheduled production activities and unpredictable d...

  6. Triggering on New Physics with the CMS Detector

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Tulika [Boston Univ., MA (United States)

    2016-07-29

    The BU CMS group led by PI Tulika Bose has made several significant contributions to the CMS trigger and to the analysis of the data collected by the CMS experiment. Group members have played a leading role in the optimization of trigger algorithms, the development of trigger menus, and the online operation of the CMS High-Level Trigger. The group’s data analysis projects have concentrated on a broad spectrum of topics that take full advantage of their strengths in jets and calorimetry, trigger, lepton identification as well as their considerable experience in hadron collider physics. Their publications cover several searches for new heavy gauge bosons, vector-like quarks as well as diboson resonances.

  7. A new era for central processing and production in CMS

    International Nuclear Information System (INIS)

    Fajardo, E; Gutsche, O; Foulkes, S; Linacre, J; Spinoso, V; Lahiff, A; Gomez-Ceballos, G; Klute, M; Mohapatra, A

    2012-01-01

    The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, CMS computing has migrated at the Tier-1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage, as well as a reduction in operational manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues have arisen during its commissioning. The largest operational challenges were in the usage and monitoring of resources, mainly a result of a change in the way work is allocated. Instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of the operational challenges, and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier-2 level in late 2011. As case studies, we will show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
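
    The change in operational model, from operators running individual workflows to centrally injected requests pulled by agents, can be illustrated with a toy request manager. The class and names below are hypothetical and stand only for the pattern, not for the WMAgent or Request Manager implementations.

      # Toy "request manager": all work is injected centrally as requests, agents
      # pull the next available request instead of operators assigning workflows
      # by hand, and operators only watch aggregate state. Purely illustrative.

      from collections import Counter, deque

      class RequestManager:
          def __init__(self):
              self.queue = deque()
              self.status = {}                     # request name -> state

          def inject(self, name):
              self.queue.append(name)
              self.status[name] = "new"

          def acquire(self, agent):
              if not self.queue:
                  return None
              name = self.queue.popleft()
              self.status[name] = f"running@{agent}"
              return name

          def overview(self):
              # What an operator monitors: counts per state, not individual jobs.
              return Counter(s.split("@")[0] for s in self.status.values())

      rm = RequestManager()
      for i in range(5):
          rm.inject(f"MonteCarlo_campaign_{i}")
      rm.acquire("agent_T1_A")
      rm.acquire("agent_T1_B")
      print(rm.overview())    # Counter({'new': 3, 'running': 2})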

  8. 42 CFR 457.1003 - CMS review of waiver requests.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false CMS review of waiver requests. 457.1003 Section 457.1003 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Waivers: General Provisions § 457.1003 CMS review of waiver requests. CMS will review the waiver requests...

  9. 28 October 2013- Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    28 October 2013- Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

  10. SWATCH Common software for controlling and monitoring the upgraded CMS Level-1 trigger

    CERN Document Server

    Lazaridis, Christos; Bunkowski, Karol; Codispoti, Giuseppe; Dirkx, Glenn; Ghabrous Larrea, Carlos; Lingemann, Joschka; Kreczko, Lukasz; Thea, Alessandro; Williams, Tom

    2017-01-01

    The Large Hadron Collider at CERN restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system is being deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in the previous run. This system must be controlled and monitored coherently through software, with high operational efficiency. The legacy system is composed of approximately 4000 data processor boards, of several custom application-specific designs. These boards are organised into several subsystems; each subsystem receives data from different detector systems (calorimeters, barrel/endcap muon detectors), or with differing granularity. These boards have been controlled and monitored by a medium-sized distributed system of over 40 computers and 200 processes. Only a small fraction of the control and monitoring software was common between the different s...

  11. Performance of R-GMA based grid job monitoring system for CMS data production

    CERN Document Server

    Byrom, Robert; Fisher, Steve M; Grandi, Claudio; Hobson, Peter R; Kyberd, Paul; MacEvoy, Barry; Nebrensky, Jindrich Josef; Tallini, Hugh; Traylen, Stephen

    2004-01-01

    High Energy Physics experiments, such as the Compact Muon Solenoid (CMS) at the CERN laboratory in Geneva, have large-scale data processing requirements, with stored data accumulating at a rate of 1 Gbyte/s. This load comfortably exceeds any previous processing requirements and we believe it may be most efficiently satisfied through Grid computing. Management of large Monte Carlo productions (~3000 jobs) or data analyses and the quality assurance of the results requires careful monitoring and bookkeeping, and an important requirement when using the Grid is the ability to monitor transparently the large number of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. We have previously developed a system allowing us to test its performance under a heavy load while using few real Grid resources. We present the latest results on this system and comp...
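
    The producer/consumer pattern of the Grid Monitoring Architecture can be sketched with an ordinary relational database standing in for R-GMA: job wrappers publish status tuples, and a single SQL-style query summarises thousands of jobs. The table layout and states below are invented for illustration.

      # Toy producer/consumer monitoring along the lines of the Grid Monitoring
      # Architecture: job wrappers publish status tuples into a relational view
      # and a consumer issues SQL-style queries to follow thousands of jobs at
      # once. SQLite stands in for the R-GMA infrastructure purely for
      # illustration.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE job_status (job_id TEXT, site TEXT, state TEXT)")

      def publish(job_id, site, state):
          """Producer side: a running job publishes its current state."""
          db.execute("INSERT INTO job_status VALUES (?, ?, ?)", (job_id, site, state))

      # Simulate a small production publishing status records.
      for i in range(3000):
          site = f"site_{i % 5}"
          state = "Done" if i % 7 else "Aborted"
          publish(f"job_{i:05d}", site, state)

      # Consumer side: one query summarises the whole production per site.
      rows = db.execute(
          "SELECT site, state, COUNT(*) FROM job_status GROUP BY site, state"
      ).fetchall()
      for site, state, n in rows:
          print(site, state, n)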

  12. modeling workflow management in a distributed computing system

    African Journals Online (AJOL)

    Dr Obe

    ... communication system, which allows for computerized support. ... Keywords: Distributed computing system; Petri nets; Workflow management. ... A distributed operating system usually ... the questionnaire is returned with invalid data ...
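
    A minimal Petri-net sketch of such a workflow step is given below; places hold tokens, and a transition fires when its input place is marked and its guard holds. The places, transitions and the questionnaire example are invented to illustrate the modelling idea only.

      # Minimal Petri-net sketch of a workflow step: places hold tokens
      # (questionnaires), transitions fire when their input places are marked
      # and their guard holds. Invented places/transitions, for illustration.

      places = {"submitted": 1, "validated": 0, "rejected": 0}

      transitions = {
          # name: (input place, output place, guard on the token payload)
          "accept": ("submitted", "validated", lambda form: form["valid"]),
          "reject": ("submitted", "rejected", lambda form: not form["valid"]),
      }

      def fire(name, form):
          src, dst, guard = transitions[name]
          if places[src] > 0 and guard(form):
              places[src] -= 1
              places[dst] += 1
              return True
          return False

      form = {"valid": False}   # e.g. a questionnaire returned with invalid data
      print(fire("accept", form), fire("reject", form))   # False True
      print(places)             # {'submitted': 0, 'validated': 0, 'rejected': 1}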

  13. Xenon-Xenon collision events in CMS

    CERN Multimedia

    Mc Cauley, Thomas

    2017-01-01

    One of the first-ever xenon-xenon collision events recorded by CMS during the LHC’s one-day-only heavy-ion run with xenon nuclei. The large number of tracks emerging from the centre of the detector show the many simultaneous nucleon-nucleon interactions that take place when two xenon nuclei, each with 54 protons and 75 neutrons, collide inside CMS.

  14. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  15. 42 CFR 411.386 - CMS's advisory opinions as exclusive.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS's advisory opinions as exclusive. 411.386... Relationships Between Physicians and Entities Furnishing Designated Health Services § 411.386 CMS's advisory... described in § 411.370. CMS has not and does not issue a binding advisory opinion on the subject matter in...

  16. Study of the CMS Phase-1 Pixel Pilot Blade Reconstruction

    CERN Document Server

    Vami, Tamas Almos

    2017-01-01

    The Compact Muon Solenoid (CMS) detector is one of two general-purpose detectors that measure the products of high energy particle interactions in the Large Hadron Collider (LHC) at CERN. The silicon pixel detector is the innermost component of the CMS tracking system. The detector, which was in operation between 2009 and 2016, was replaced with an upgraded one at the beginning of 2017. During the previous shutdown period of the LHC, a prototype readout system and a third disk were inserted into the old forward pixel detector, with eight prototype blades constructed using the new digital read-out chips. Testing the performance of these pilot modules enabled us to gain operational experience with the upgraded detector. In this paper, the reconstruction and analysis of the data taken with the new modules are presented, including information on the calibration of the reconstruction software. The hit finding efficiency and track-hit residual distributions are also shown.

  17. Operation and Monitoring of the CMS Regional Calorimeter Trigger Hardware

    CERN Document Server

    Klabbers, P

    2008-01-01

    The electronics for the Regional Calorimeter Trigger (RCT) of the Compact Muon Solenoid Experiment (CMS) have been produced, tested, and installed. The RCT hardware consists of one clock distribution crate and 18 double-sided crates containing custom boards, ASICs, and backplanes. The RCT receives 8-bit energies and a data quality bit from the HCAL and ECAL Trigger Primitive Generators (TPGs) and sends them to the CMS Global Calorimeter Trigger (GCT) after processing. Integration tests with the TPG and GCT subsystems have been successful. Installation is complete and the RCT is integrated into the Level-1 Trigger chain. Data taking has begun using detector noise, cosmic rays, proton-beam debris, and beam-halo muons. The operation and configuration of the RCT is a completely automated process. The tools to monitor, operate, and debug the RCT are mature and will be described in detail, as well as the results from data taking with the RCT.

  18. Fragmentation patterns of jets in pPb collisions in CMS

    CERN Document Server

    AUTHOR|(CDS)2089542

    2016-01-01

    The nuclear parton distribution function and the flavor composition of hard scattering processes can be accurately studied using jet fragmentation functions. Recent measurements of the pPb nuclear modification factor ($R_{pPb}$), with diverging values for inclusive jets and charged hadrons, have raised questions about jet fragmentation properties in pPb collisions. These spectrum measurements are performed with a pp reference at 5.02 TeV constructed by interpolation or extrapolation from different $\\sqrt{s}$, and on steeply falling power-law spectra. As the jet fragmentation function evolves only logarithmically with $\\sqrt{s}$, this further underscores the importance of a direct measurement. Together with the CMS results on the pPb inclusive jet and charged hadron $R_{pPb}$, we introduce the new CMS measurement of the fragmentation function in pPb collisions, where, within our uncertainties, jets in pPb are found to have fragmentation properties identical to those of jets in pp. We will further discuss the consistency and tension amo...

  19. Top quark pair production and modeling via QCD in CMS

    CERN Document Server

    Gonzalez Fernandez, Juan Rodrigo

    2017-01-01

    Measurements of the inclusive and differential top quark pair ($\\textrm{t}\\bar{\\textrm{t}}$) production cross section at centre-of-mass energies of 13 TeV and 5.02 TeV are presented, performed using CMS data collected in 2015 and 2016. The inclusive cross section is measured in the lepton+jets, dilepton and fully hadronic channels. Top quark pair differential cross sections are measured and are given as functions of various kinematic observables of the (anti)top quark, the $\\textrm{t}\\bar{\\textrm{t}}$ system, and of the jets and leptons in the final state. Furthermore, the multiplicity and kinematic distributions of the additional jets produced in $\\textrm{t}\\bar{\\textrm{t}}$ events are also investigated, and their modeling is compared for several generators. A new tune of parameters is developed for some of the generators. In addition, first measurements of top quark pair production with additional b quarks in the final state are presented. Furthermore, searches for four top quark production in CMS are also present...

  20. Studies of Jet Quenching in PbPb collisions at CMS

    CERN Document Server

    Nguyen, Matthew

    2012-01-01

    Jets are an important tool to probe the hot, dense medium produced in ultra-relativistic heavy-ion collisions. At the collision energies available at the Large Hadron Collider (LHC), there is copious production of hard processes, such that high p_T jets may be differentiated from the heavy-ion underlying event. The multipurpose Compact Muon Solenoid (CMS) detector is well designed to measure hard scattering processes with its high quality calorimeters and high precision silicon tracker. Jet quenching has been studied in CMS in PbPb collisions at sqrt(s_NN)= 2.76 TeV. As a function of centrality, dijet events with a high p_T leading jet were found to have an increasing momentum imbalance that was significantly larger than predicted by simulations. The angular distribution of jet fragmentation products has been explored by associating charged tracks with the jets measured in the calorimeters. By projecting the momenta of charged tracks onto the leading jet axis it is shown that the apparent momentum imbalance o...

  1. New operator assistance features in the CMS Run Control System

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan F; Gigi, Dominique; Michail Gładki; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr; Vougioukas, M.

    2017-01-01

    The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following t...

  2. A software approach for readout and data acquisition in CMS

    CERN Document Server

    Antchev, G H; Chatellier, S; Cittolin, Sergio; Erhan, S; Gigi, D; Gutleber, J; Jacobs, C; Meijers, F; Nicolau, R; Orsini, L; Pollet, Lucien; Rácz, A; Samyn, D; Sinanis, N; Sphicas, Paris

    2000-01-01

    Traditional systems dominated by performance constraints tend to neglect other qualities such as maintainability and configurability. Object-Orientation allows one to encapsulate the technology differences in communication sub-systems and to provide a uniform view of the data transport layer to the systems engineer. We applied this paradigm to the design and implementation of intelligent data servers in the Compact Muon Solenoid (CMS) data acquisition system at CERN to easily exploit the physical communication resources of the available equipment. CMS is a high-energy physics experiment under study that incorporates a highly distributed data acquisition system. This paper outlines the architecture of one part, the so-called Readout Unit, and shows how we can exploit the object advantage for systems with specific data rate requirements. A C++ streams communication layer with zero-copy functionality has been established for UDP, TCP, DLPI and specific Myrinet and VME bus communication on the VxWorks real-time...

  3. Trying to Predict the Future - Resource Planning and Allocation in CMS

    CERN Document Server

    Kreuzer, Peter; Fisk, Ian; Merino, Gonzalo

    2012-01-01

    In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this presentation we will discuss the resource planning techniques used in CMS to predict the computing resources several years in advance. We will discuss how we attempt to implement the activities of the computing model in spreadsheets and formulas to calculate the needs. We will talk about how those needs are reflected in the 2012 running and how the planned long shutdown of the LHC in 2013 and 2014 impacts the planning process and the outcome. In the end we will speculate on the computing needs in the second major run of the LHC.
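
    The kind of formula typically encoded in such planning spreadsheets can be sketched as follows; every number below is a placeholder rather than an actual CMS planning input, and the structure (events × time per event / efficiency, events × size × replicas) is the point of the example.

      # A minimal sketch of resource-planning formulas: CPU need follows from
      # events to process, time per event and expected efficiency; disk follows
      # from event counts, per-event sizes and the replication factor.
      # All numbers are placeholders, not actual CMS planning inputs.

      def cpu_need_hs06(n_events, sec_per_event_hs06, cpu_efficiency,
                        seconds_in_year=365 * 24 * 3600):
          """Average HS06 capacity needed to process n_events within one year."""
          return n_events * sec_per_event_hs06 / cpu_efficiency / seconds_in_year

      def disk_need_pb(n_events, kb_per_event, replicas):
          """Disk volume in PB for n_events stored with a given replication factor."""
          return n_events * kb_per_event * replicas / 1e12

      n_data_events = 5e9                     # placeholder: events recorded in a year
      n_mc_events = 1.3 * n_data_events       # placeholder MC-to-data ratio

      cpu = cpu_need_hs06(n_data_events + n_mc_events, sec_per_event_hs06=100, cpu_efficiency=0.8)
      disk = disk_need_pb(n_data_events + n_mc_events, kb_per_event=400, replicas=2)
      print(f"CPU ~ {cpu:,.0f} HS06, disk ~ {disk:.1f} PB")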

  4. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  5. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  6. CMS Financial Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains the annual CMS financial statements as required under the Chief Financial Officers (CFO) Act of 1990 (P.L. 101-576). The CFO Act marked a major...

  7. CMS fact sheet : to give an overview of the basic facts on the CMS Detector, its aims and collaboration

    CERN Multimedia

    CMS, Outreach

    2010-01-01

    2-sided color print A4 size sheet containing the facts on the CMS Detector, its name, what it is designed to do, questions scientists hope to answer, collaboration members, detector parts and their functions, and other miscellaneous facts on the CMS detector

  8. 42 CFR 405.2440 - Conditions for reinstatement after termination by CMS.

    Science.gov (United States)

    2010-10-01

    ... CMS. 405.2440 Section 405.2440 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... § 405.2440 Conditions for reinstatement after termination by CMS. When CMS has terminated an agreement with a Federally qualified health center, CMS will not enter into another agreement with the Federally...

  9. The CMS Masterclass and Particle Physics Outreach

    Energy Technology Data Exchange (ETDEWEB)

    Cecire, Kenneth [Notre Dame U.; Bardeen, Marjorie [Fermilab; McCauley, Thomas [Notre Dame U.

    2014-01-01

    The CMS Masterclass enables high school students to analyse authentic CMS data. Students can draw conclusions on key ratios and particle masses by combining their analyses. In particular, they can use the ratio of W^+ to W^- candidates to probe the structure of the proton, they can find the mass of the Z boson, and they can identify additional particles including, tentatively, the Higgs boson. In the United States, masterclasses are part of QuarkNet, a long-term program that enables students and teachers to use cosmic ray and particle physics data for learning with an emphasis on data from CMS.
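
    The two measurements mentioned above, the W^+/W^- candidate ratio and a dimuon invariant mass, reduce to simple arithmetic. The sketch below uses the standard relation m^2 = (E1+E2)^2 - |p1+p2|^2 with invented four-vectors; it is not masterclass software, only the calculation students effectively perform.

      # Small sketch of the two masterclass measurements: the W+/W- candidate
      # ratio and the dimuon invariant mass from m^2 = (E1+E2)^2 - |p1+p2|^2.
      # The charges and four-vectors below are invented illustrative values.

      import math

      def invariant_mass(p1, p2):
          """p = (E, px, py, pz) in GeV; returns the pair invariant mass in GeV."""
          E = p1[0] + p2[0]
          px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
          return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

      # Counting W charge states from a list of candidate charges (+1 / -1).
      w_charges = [+1, +1, -1, +1, -1, +1, +1, -1]
      ratio = w_charges.count(+1) / w_charges.count(-1)
      print(f"W+/W- ratio = {ratio:.2f}")

      # An invented back-to-back dimuon pair (massless approximation) giving
      # a pair mass of 90 GeV, in the Z mass region.
      mu_plus  = (45.0,  30.0,  30.0,  15.0)
      mu_minus = (45.0, -30.0, -30.0, -15.0)
      print(f"dimuon mass = {invariant_mass(mu_plus, mu_minus):.1f} GeV")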

  10. CMS conditions data access using FroNTier

    International Nuclear Information System (INIS)

    Blumenfeld, Barry; Johns Hopkins U.; Dykstra, David; Lueking, Lee; Wicklund, Eric; Fermilab

    2007-01-01

    The CMS experiment at the LHC has established an infrastructure using the FroNTier framework to deliver conditions (i.e. calibration, alignment, etc.) data to processing clients worldwide. FroNTier is a simple web service approach providing client HTTP access to a central database service. The system for CMS has been developed to work with POOL which provides object relational mapping between the C++ clients and various database technologies. Because of the read only nature of the data, Squid proxy caching servers are maintained near clients and these caches provide high performance data access. Several features have been developed to make the system meet the needs of CMS including careful attention to cache coherency with the central database, and low latency loading required for the operation of the online High Level Trigger. The ease of deployment, stability of operation, and high performance make the FroNTier approach well suited to the GRID environment being used for CMS offline, as well as for the online environment used by the CMS High Level Trigger (HLT). The use of standard software, such as Squid and various monitoring tools, make the system reliable, highly configurable and easily maintained. We describe the architecture, software, deployment, performance, monitoring and overall operational experience for the system
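
    The access pattern described above, plain HTTP requests routed through a nearby Squid cache, can be sketched with the Python standard library. The proxy address and URL below are placeholders, not a real FroNTier endpoint, and the actual CMS clients go through the POOL/FroNTier libraries rather than raw HTTP calls.

      # Minimal sketch of the access pattern: fetch a conditions payload over
      # plain HTTP, routed through a nearby Squid proxy so repeated requests are
      # served from the cache. The URLs and proxy address are placeholders.

      import urllib.request

      SQUID_PROXY = "http://localhost:3128"           # placeholder local Squid
      FRONTIER_URL = "http://conditions.example.org/frontier/query?payload=ecal_calib_v1"

      opener = urllib.request.build_opener(
          urllib.request.ProxyHandler({"http": SQUID_PROXY})
      )

      def fetch_conditions(url, timeout=10):
          """Fetch a payload through the proxy; the Squid cache absorbs repeats."""
          with opener.open(url, timeout=timeout) as response:
              return response.read()

      try:
          payload = fetch_conditions(FRONTIER_URL)
          print(f"received {len(payload)} bytes of conditions data")
      except OSError as exc:
          print(f"no proxy/server available in this sketch: {exc}")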

  11. CMS conditions data access using FroNTier

    International Nuclear Information System (INIS)

    Blumenfeld, B; Dykstra, D; Lueking, L; Wicklund, E

    2008-01-01

    The CMS experiment at the LHC has established an infrastructure using the FroNTier framework to deliver conditions (i.e. calibration, alignment, etc.) data to processing clients worldwide. FroNTier is a simple web service approach providing client HTTP access to a central database service. The system for CMS has been developed to work with POOL which provides object relational mapping between the C++ clients and various database technologies. Because of the read only nature of the data, Squid proxy caching servers are maintained near clients and these caches provide high performance data access. Several features have been developed to make the system meet the needs of CMS including careful attention to cache coherency with the central database, and low latency loading required for the operation of the online High Level Trigger. The ease of deployment, stability of operation, and high performance make the FroNTier approach well suited to the GRID environment being used for CMS offline, as well as for the online environment used by the CMS High Level Trigger. The use of standard software, such as Squid and various monitoring tools, makes the system reliable, highly configurable and easily maintained. We describe the architecture, software, deployment, performance, monitoring and overall operational experience for the system

  12. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ...] to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  13. 42 CFR 411.379 - When CMS accepts a request.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false When CMS accepts a request. 411.379 Section 411.379... Physicians and Entities Furnishing Designated Health Services § 411.379 When CMS accepts a request. (a) Upon receiving a request for an advisory opinion, CMS promptly makes an initial determination of whether the...

  14. CMS Virtual Visits @ European Researchers Night, 30 September 2016

    CERN Multimedia

    Lapka, Marzena

    2016-01-01

    CMS hosted four virtual visits during European Researchers Night. Audiences from Greece (NCSR Demokritos, Athens), Poland (University of Science and Technology in Krakow), Italy (Psiquadro in Perugia & INFN in Pisa) and Portugal (Planetarium Calouste Gulbenkian, organised by LIP) had the opportunity to converse with CMS researchers and "virtually" visit the CMS Control Room and underground facilities.

  15. Comparative transcript profiling of the fertile and sterile flower buds of pol CMS in B. napus.

    Science.gov (United States)

    An, Hong; Yang, Zonghui; Yi, Bin; Wen, Jing; Shen, Jinxiong; Tu, Jinxing; Ma, Chaozhi; Fu, Tingdong

    2014-04-03

    The Polima (pol) system of cytoplasmic male sterility (CMS) and its fertility restoration gene Rfp have been used in hybrid breeding in Brassica napus, which has greatly improved the yield of rapeseed. However, the mechanism of the male sterility transition in pol CMS remains to be determined. To investigate the transcriptome during the male sterility transition in pol CMS, a near-isogenic line (NIL) of pol CMS was constructed. The phenotypic features and sterility stage were confirmed by anatomical analysis. Subsequently, we compared the genomic expression profiles of fertile and sterile young flower buds by RNA-Seq. A total of 105,481,136 sequences were successfully obtained. These reads were assembled into 112,770 unigenes, which composed the transcriptome of the bud. Among these unigenes, 72,408 (64.21%) were annotated using public protein databases and classified into functional clusters. In addition, we investigated the changes in expression of the fertile and sterile buds; the RNA-seq data showed 1,148 unigenes had significantly different expression and they were mainly distributed in metabolic and protein synthesis pathways. Additionally, some unigenes controlling anther development were dramatically down-regulated in sterile buds. These results suggested that an energy deficiency caused by orf224/atp6 may inhibit a series of genes that regulate pollen development through nuclear-mitochondrial interaction. This results in the sterility of pol CMS by leading to the failure of sporogenous cell differentiation. This study may provide assistance for detailed molecular analysis and a better understanding of pol CMS in B. napus.

  16. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

    During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2$\\times 10^{34}$ cm$^{-2}s^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  17. File and metadata management for BESIII distributed computing

    International Nuclear Information System (INIS)

    Nicholson, C; Zheng, Y H; Lin, L; Deng, Z Y; Li, W D; Zhang, X M

    2012-01-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e− collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.
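
    A toy file/metadata catalogue query, of the kind such a data management system has to answer for every analysis job, might look as follows. The field names, paths and run numbers are invented; this is not the BESIII catalogue interface.

      # Toy file/metadata catalogue: files carry metadata (run number, data type,
      # stream) and a query returns the logical file names a job should read.
      # Field names and values are invented for illustration.

      catalogue = [
          {"lfn": "/bes3/data/run_30001.dst", "run": 30001, "type": "dst", "stream": "all"},
          {"lfn": "/bes3/data/run_30002.dst", "run": 30002, "type": "dst", "stream": "all"},
          {"lfn": "/bes3/mc/jpsi_30001.rtraw", "run": 30001, "type": "mc", "stream": "jpsi"},
      ]

      def query(run_min, run_max, data_type):
          return [entry["lfn"] for entry in catalogue
                  if run_min <= entry["run"] <= run_max and entry["type"] == data_type]

      print(query(30001, 30002, "dst"))
      # ['/bes3/data/run_30001.dst', '/bes3/data/run_30002.dst']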

  18. Systementwicklungen und Messungen zur Auslese und Kalibration von CMS Pipeline Chips für die angewandte Forschung und Serientests an CMS Streifendetektoren

    CERN Document Server

    Petertill, Markus

    2001-01-01

    The future 14 TeV proton-proton accelerator LHC at CERN serves for the CMS experiment as a high-rate source of deep inelastic interactions of quarks and gluons. CMS at the LHC will be one of the "discovery machines" for new particles and theories. The central tracker in the superconducting 4 T magnet of CMS has to ensure precise track reconstruction in space-time. Part I introduces the major tasks of the central tracker in order to prepare the main points of the thesis. In CMS one has to cope with particle fluences of about 10^6 cm^-2 s^-1 and L1 trigger rates of 100 kHz. System developments have led to a powerful data acquisition system (DAQ), constructed in VME, for emulation of the hardware algorithms in the front-end driver and for research into the properties of CMS microstrip detectors. The experience and the results point to special problems for the operation of the CMS tracker. For most of them solutions will be found which can be emulated in the DAQ or simulated with offline data. If possible ...

  19. Commissioning the CMS alignment and calibration framework

    International Nuclear Information System (INIS)

    Futyan, David

    2010-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience with this framework, and results of this validation are reported.
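
    The prompt calibration loop described above can be summarised schematically: a dedicated stream is processed quickly, a new constant is derived and stored with an interval of validity (IOV), and prompt reconstruction of the same run picks up the freshest payload. The sketch below uses entirely illustrative values and a list standing in for the conditions database.

      # Schematic prompt-calibration loop: derive a constant from a dedicated
      # calibration stream, store it with an interval of validity (IOV), and let
      # reconstruction of a given run pick up the most recent valid payload.
      # Entirely illustrative values; not the CMS framework.

      from statistics import mean

      conditions_db = []            # list of (first_run, payload) records

      def derive_and_upload(run, calib_stream_values):
          payload = {"pedestal": mean(calib_stream_values)}
          conditions_db.append((run, payload))          # IOV starts at this run

      def payload_for(run):
          valid = [(r, p) for r, p in conditions_db if r <= run]
          return max(valid, key=lambda rp: rp[0])[1] if valid else {"pedestal": 0.0}

      # Run 100: the calibration stream is processed before prompt reconstruction starts.
      derive_and_upload(100, [2.1, 1.9, 2.0, 2.2])
      print(payload_for(100))   # {'pedestal': 2.05}
      print(payload_for(99))    # an earlier run falls back to the default payload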

  20. Russian and Belorussian firms receive CMS Gold Awards

    CERN Multimedia

    2003-01-01

    On 7 March, CMS handed out its three latest Gold Awards in recognition of outstanding supplier performance. The directors of two Russian firms (ENTEK and the Myasishchev Design Bureau) and of the Belorussian company MZOR received their awards on the occasion of a visit by dignitaries from the two countries. The directors and dignitaries are pictured here with leaders of the CMS Collaboration in front of the CMS hadron calorimeter end-cap at the detector's assembly site.