WorldWideScience

Sample records for cms tier-2 sites

  1. Operational experience with CMS Tier-2 sites

    International Nuclear Information System (INIS)

    Gonzalez Caballero, I

    2010-01-01

    In the CMS computing model, more than one third of the computing resources are located at Tier-2 sites, which are distributed across the countries in the collaboration. These sites are the primary platform for user analyses; they host datasets that are created at Tier-1 sites, and users from all CMS institutes submit analysis jobs that run on those data through grid interfaces. They are also the primary resource for the production of large simulation samples for general use in the experiment. As a result, Tier-2 sites have an interesting mix of organized experiment-controlled activities and chaotic user-controlled activities. CMS currently operates about 40 Tier-2 sites in 22 countries, making the sites a far-flung computational and social network. We describe our operational experience with the sites, touching on our achievements, the lessons learned, and the challenges for the future.

  2. Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS

    International Nuclear Information System (INIS)

    Letts, J; Magini, N

    2011-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfer links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.
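
    The abstract above describes rating and commissioning transfer links from their observed transfer quality. The sketch below is a toy illustration of that idea only: it aggregates per-link success ratios and flags links below a threshold. The 80% threshold, the record layout and the site names are assumptions for illustration, not the actual DDT commissioning criteria.

```python
"""Toy rating of Tier-2 to Tier-2 transfer links from transfer attempts."""
from collections import defaultdict

def rate_links(transfers, quality_threshold=0.8):
    """transfers: iterable of (source_site, dest_site, succeeded: bool)."""
    ok = defaultdict(int)
    total = defaultdict(int)
    for src, dst, succeeded in transfers:
        link = (src, dst)
        total[link] += 1
        ok[link] += bool(succeeded)
    # A link is "commissioned" here when its success ratio passes the threshold.
    return {link: ("commissioned" if ok[link] / total[link] >= quality_threshold
                   else "needs debugging")
            for link in total}

# Illustrative sample data (site names chosen for the example only).
sample = [
    ("T2_ES_CIEMAT", "T2_UK_London_IC", True),
    ("T2_ES_CIEMAT", "T2_UK_London_IC", True),
    ("T2_US_Nebraska", "T2_DE_DESY", False),
    ("T2_US_Nebraska", "T2_DE_DESY", True),
]
print(rate_links(sample))
```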

  3. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    Energy Technology Data Exchange (ETDEWEB)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J [CIEMAT, Madrid (Spain); Cabrillo, I; Caballero, I G; Marco, R; Matorras, F [IFCA, Santander (Spain); Flix, J; Merino, G [PIC, Barcelona (Spain)], E-mail: jose.hernandez@ciemat.es

    2008-07-15

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  4. Exercising CMS dataflows and workflows in computing challenges at the Spanish Tier-1 and Tier-2 sites

    International Nuclear Information System (INIS)

    Caballero, J; Colino, N; Peris, A D; G-Abia, P; Hernandez, J M; R-Calonge, F J; Cabrillo, I; Caballero, I G; Marco, R; Matorras, F; Flix, J; Merino, G

    2008-01-01

    An overview of the data transfer, processing and analysis operations conducted at the Spanish Tier-1 (PIC, Barcelona) and Tier-2 (CIEMAT-Madrid and IFCA-Santander federation) centres during the past CMS CSA06 Computing, Software and Analysis challenge and in preparation for CSA07 is presented.

  5. Large Scale Commissioning and Operational Experience with Tier-2 to Tier-2 Data Transfer Links in CMS

    CERN Document Server

    Letts, James

    2010-01-01

    Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model...

  6. CMS Experiment Data Processing at RDMS CMS Tier 2 Centers

    CERN Document Server

    Gavrilov, V; Korenkov, V; Tikhonenko, E; Shmatov, S; Zhiltsov, V; Ilyin, V; Kodolova, O; Levchuk, L

    2012-01-01

    The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994 [1]. RDMS CMS takes an active part in the Compact Muon Solenoid (CMS) Collaboration [2] at the Large Hadron Collider (LHC) [3] at CERN [4]. The RDMS CMS Collaboration brings together more than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) member states. RDMS scientists, engineers and technicians participated actively in the design, construction and commissioning of all CMS sub-detectors in the forward regions. The RDMS CMS physics program has been developed taking into account the essential role of these sub-detectors for the corresponding physics channels. RDMS scientists have made large contributions to the preparation of QCD, Electroweak, Exotics, Heavy Ion and other physics studies at CMS. An overview of RDMS CMS physics tasks and RDMS CMS computing activities is presented in [5-11]. RDMS CMS computing support should satisfy the LHC data processing and analysis requirements at the running phase of the CMS experime...

  7. WHALE, a management tool for Tier-2 LCG sites

    Science.gov (United States)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchical, distributed computing facility composed of more than 140 computing centres, organized in 4 tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, such as job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a tool in use at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides the specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the fields of job management, dataset transfer, and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, such as closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the...

  8. WHALE, a management tool for Tier-2 LCG sites

    International Nuclear Information System (INIS)

    Barone, L M; Organtini, G; Talamo, I G

    2012-01-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchical, distributed computing facility composed of more than 140 computing centres, organized in 4 tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, such as job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a tool in use at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides the specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the fields of job management, dataset transfer, and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, such as closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the...
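
    The two WHALE records above describe a plugin architecture in which the output of one plugin feeds the input of the next, pipe-like. The Python sketch below illustrates that composition pattern in general terms; the registry, plugin names and record fields are invented for the example and are not WHALE's actual API.

```python
"""Minimal plugin-pipeline sketch in the spirit of WHALE's architecture."""
from typing import Callable, Iterable, List

Plugin = Callable[[Iterable[dict]], Iterable[dict]]
REGISTRY: dict = {}

def plugin(name: str) -> Callable[[Plugin], Plugin]:
    """Register a plugin under a short name so pipelines can be built by name."""
    def wrap(func: Plugin) -> Plugin:
        REGISTRY[name] = func
        return func
    return wrap

@plugin("list_nodes")
def list_nodes(_records: Iterable[dict]) -> Iterable[dict]:
    # Hypothetical source plugin: a real site tool would query the batch system.
    return [{"node": "wn-01", "failure_rate": 0.02},
            {"node": "wn-02", "failure_rate": 0.35}]

@plugin("filter_failing")
def filter_failing(records: Iterable[dict]) -> Iterable[dict]:
    # Keep only worker nodes whose job-failure rate exceeds a threshold.
    return [r for r in records if r["failure_rate"] > 0.25]

@plugin("print")
def print_records(records: Iterable[dict]) -> Iterable[dict]:
    for r in records:
        print(r)
    return records

def run_pipeline(names: List[str]) -> Iterable[dict]:
    """Chain plugins pipe-like: each plugin's output is the next one's input."""
    data: Iterable[dict] = []
    for name in names:
        data = REGISTRY[name](data)
    return data

# e.g. find worker nodes with a high job-failure rate and print them
run_pipeline(["list_nodes", "filter_failing", "print"])
```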

  9. Tier-1 and Tier-2 real-time analysis experience in CMS Data Challenge 2004

    CERN Document Server

    De Filippis, N; Pierro, A; Silvestris, L; Fanfani, A; Grandi, C; Hernández, J M; Bonacorsi, D; Corvo, M; Fanzago, F

    2005-01-01

    During the CMS Data Challenge 2004 a real-time analysis was attempted at the INFN and PIC Tier-1 and Tier-2s in order to test the ability of the instrumented methods to quickly process the data. Several agents and automatic procedures were implemented to perform the analysis at the Tier-1/2 synchronously with the data transfer from the Tier-0 at CERN. The system was implemented in the LCG-2 Grid environment and allowed on-the-fly job preparation and subsequent submission to the Resource Broker as new data came along. Running jobs accessed data from the Storage Elements via a remote file protocol whenever possible, or copied them locally with replica manager commands. Details of the procedures adopted to run the analysis jobs and the expected results are described. An evaluation of the ability of the system to maintain an analysis rate at Tier-1 and Tier-2 comparable with the data transfer rate is also presented. The results on the analysis timeline, the statistics of submitted jobs, the overall efficiency of the GRID ...

  10. Network monitoring in the Tier2 site in Prague

    International Nuclear Information System (INIS)

    Eliáš, Marek; Fiala, Lukáš; Horký, Jiří; Chudoba, Jiří; Kouba, Tomáš; Kundrát, Jan; Švec, Jan

    2011-01-01

    Network monitoring provides different types of views of the network traffic. Its output enables computing centre staff to make qualified decisions about changes in the organization of the computing centre network and to spot possible problems. In this paper we present the network monitoring framework used at the Tier-2 site in Prague at the Institute of Physics (FZU). The framework consists of standard software and custom tools. We discuss our system for hardware failure detection using syslog logging and Nagios active checks, bandwidth monitoring of physical links, and analysis of NetFlow exports from Cisco routers. We present a tool for automatic detection of the network layout based on SNMP. This tool also records topology changes in an SVN repository. An adapted weathermap4rrd is used to visualize the recorded data, giving a fast overview of the current bandwidth usage of the links in the network.
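
    The record above mentions hardware failure detection via syslog scanning combined with Nagios active checks. The sketch below shows what such a check could look like in its simplest form: it scans a log file for error patterns and returns the standard Nagios exit codes (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The log path, patterns and thresholds are illustrative assumptions, not the Prague site's actual configuration.

```python
"""Illustrative Nagios-style active check scanning syslog for hardware errors."""
import re
import sys

HW_ERROR_PATTERNS = [
    re.compile(r"I/O error", re.IGNORECASE),
    re.compile(r"Hardware Error", re.IGNORECASE),
    re.compile(r"EDAC .* (CE|UE)"),   # corrected/uncorrected memory errors
]

def check_syslog(path="/var/log/messages", warn=1, crit=5):
    try:
        with open(path, errors="replace") as fh:
            hits = [line for line in fh
                    if any(p.search(line) for p in HW_ERROR_PATTERNS)]
    except OSError as exc:
        print(f"UNKNOWN - cannot read {path}: {exc}")
        return 3
    if len(hits) >= crit:
        print(f"CRITICAL - {len(hits)} hardware error lines in {path}")
        return 2
    if len(hits) >= warn:
        print(f"WARNING - {len(hits)} hardware error lines in {path}")
        return 1
    print("OK - no hardware error lines found")
    return 0

if __name__ == "__main__":
    sys.exit(check_syslog())
```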

  11. The CMS experiment workflows on StoRM based storage at Tier-1 and Tier-2 centers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Bartolome, I Cabrillo; Matorras, F; Gonzalez Caballero, I; Sartirana, A

    2010-01-01

    Approaching LHC data taking, the CMS experiment is deploying, commissioning and operating the building tools of its grid-based computing infrastructure. The commissioning program includes testing, deployment and operation of various storage solutions to support the computing workflows of the experiment. Recently, some of the Tier-1 and Tier-2 centers supporting the collaboration have started to deploy StoRM-based storage systems. These are POSIX-based disk storage systems on top of which StoRM implements the Storage Resource Manager (SRM) version 2 interface, allowing for standards-based access from the Grid. In this note we briefly describe the experience gained so far at the CNAF Tier-1 center and at the IFCA Tier-2 center.

  12. Simulation of the job processing performance at an ALICE Tier-2 site with MONARC

    International Nuclear Information System (INIS)

    Zach, C; Adamová, D; Betev, L

    2011-01-01

    The MONARC (MOdels of Networked Analysis at Regional Centers) framework has been developed and designed with the aim to provide a tool for realistic simulations of large scale distributed computing systems, with a special focus on the Grid systems of the experiments at the CERN LHC. In this paper, we describe a usage of the MONARC framework and tools for a simulation of the job processing performance at an ALICE Tier-2 site.
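
    The MONARC record above concerns simulating job processing performance at a Tier-2. As a flavour of what such a simulation computes, the toy discrete-event model below (not MONARC itself; all parameters are invented) draws random job arrivals and runtimes, assigns each job to the earliest-free core, and reports the average queueing delay and core utilization.

```python
"""Toy discrete-event simulation of job processing at a Tier-2 (not MONARC)."""
import heapq
import random

def simulate_tier2(n_cores=100, n_jobs=5000,
                   mean_interarrival=45.0, mean_runtime=3600.0, seed=1):
    """Jobs arrive at random, wait for a free core, then run to completion.

    Returns (average wait time in seconds, fraction of core-time kept busy).
    """
    rng = random.Random(seed)
    t_arrival = 0.0
    free_at = [0.0] * n_cores          # min-heap: time when each core frees up
    heapq.heapify(free_at)
    total_wait = 0.0
    busy_time = 0.0

    for _ in range(n_jobs):
        t_arrival += rng.expovariate(1.0 / mean_interarrival)
        runtime = rng.expovariate(1.0 / mean_runtime)
        core_free = heapq.heappop(free_at)    # earliest available core
        start = max(t_arrival, core_free)
        total_wait += start - t_arrival
        busy_time += runtime
        heapq.heappush(free_at, start + runtime)

    makespan = max(free_at)
    return total_wait / n_jobs, busy_time / (n_cores * makespan)

if __name__ == "__main__":
    wait, util = simulate_tier2()
    print(f"mean wait: {wait / 60:.1f} min, core utilization: {util:.0%}")
```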

  13. Tier 2 guidelines and remediation of Tebuthiuron on a native prairie site

    Energy Technology Data Exchange (ETDEWEB)

    Bessie, K.; Harckham, N.; Dance, T. [EBA Engineering Consultants Ltd., Calgary, AB (Canada); Burk, A. [EnCana Corp., Calgary, AB (Canada); Stephenson, G. [Stantec Consulting, Guelph, ON (Canada); Corbet, B. [Access Analytical Laboratories Inc., Calgary, AB (Canada)

    2009-10-01

    Tebuthiuron is a sterilant used to control vegetation at upstream and midstream petroleum sites. This article discussed the remediation processes used to reclaim a native prairie site contaminated with tebuthiuron. The site was located within a dry mixed grass natural area. A literature review was conducted to establish soil eco-contact guidelines specific to tebuthiuron. A site-specific ecotoxicity assessment was then conducted using a liquid chromatograph to detect tebuthiuron limits in the contaminated soils. A soil sampling technique was used to delineate the affected areas at the site. Site soils were spiked with various concentrations of tebuthiuron ranging from 0.00003 mg/kg to 3000 mg/kg. Test species included Folsomia candida, an earthworm, and 4 plant species. The study showed that the invertebrate species were less sensitive to tebuthiuron than the plant species. A groundwater assessment showed that tebuthiuron levels exceeded Tier 1 groundwater remediation guidelines. A multilayer hydrogeological model showed that remediation guidelines were orders of magnitude greater than the Tier 1 groundwater remediation guidelines. A thermal desorption technique was used to remediate the site. 7 refs., 8 figs.

  14. The Legnaro-Padova distributed Tier-2: challenges and results

    International Nuclear Information System (INIS)

    Badoer, Simone; Biasotto, Massimo; Fantinel, Sergio

    2014-01-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible through a Grid-based interface implemented with multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to...

  15. The Legnaro-Padova distributed Tier-2: challenges and results

    Science.gov (United States)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be upgraded to 20 Gbps). Nevertheless these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible through a Grid-based interface implemented with multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2, and deployed also at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to...

  16. An optimization of the ALICE XRootD storage cluster at the Tier-2 site in Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Horky, J

    2012-01-01

    ALICE, as well as the other experiments at the CERN LHC, has been building a distributed data management infrastructure since 2002. Experience gained during years of operations with different types of storage managers deployed over this infrastructure has shown that the most adequate storage solution for ALICE is the native XRootD manager developed within a CERN-SLAC collaboration. The XRootD storage clusters exhibit higher stability and availability in comparison with other storage solutions and demonstrate a number of other advantages, such as support for high-speed WAN data access and no need to maintain complex databases. Two of the operational characteristics of XRootD data servers are a relatively high number of open sockets and a high Unix load. In this article, we describe our experience with the tuning and optimization of the machines hosting the XRootD servers, which are part of the ALICE storage cluster at the Tier-2 WLCG site in Prague, Czech Republic. The optimization procedure, in addition to boosting the read/write performance of the servers, also resulted in a reduction of the Unix load.
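
    The abstract above names two server-side symptoms, a high number of open sockets and a high Unix load. The minimal stdlib sketch below only observes those two quantities on a Linux host by reading the /proc filesystem; it is an illustration for monitoring purposes and is not part of the ALICE or XRootD tooling.

```python
"""Observe open TCP sockets and load average on a Linux data server."""

def open_tcp_sockets():
    """Count entries in the kernel's TCP socket tables (IPv4 and IPv6)."""
    count = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as fh:
                count += sum(1 for _ in fh) - 1   # skip the header line
        except OSError:
            pass
    return count

def load_average():
    """Return the 1, 5 and 15 minute Unix load averages."""
    with open("/proc/loadavg") as fh:
        one, five, fifteen = fh.read().split()[:3]
    return float(one), float(five), float(fifteen)

if __name__ == "__main__":
    print("open TCP sockets:", open_tcp_sockets())
    print("load average (1/5/15 min):", load_average())
```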

  17. Storageless and caching Tier-2 models in the UK context

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Dewhurst, Alastair; Crooks, David; MacMahon, Ewan; Roy, Gareth; Smith, Oliver; Mohammed, Kashif; Brew, Chris; Britton, David

    2017-10-01

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where “fat” Tier-2s (“T2Ds”) and “thin” Tier-2s (“T2Cs”) provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this paper presents the results of testing storage models for T2Cs, where the “thin” nature is expressed by the site having either no local data storage or only a thin caching layer; data is streamed or copied from a “nearby” T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites, but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of “Rucio Cache” integration with ARC’s caching layer.
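
    To make the “thin caching layer” idea concrete, the sketch below shows the cache-or-fetch logic in its simplest form: look for a file in a small local cache and otherwise fetch it from a nearby T2D before handing a local path to the job. The paths and the fetch callback are placeholders; real deployments use e.g. ARC's cache or XRootD-based caching rather than code like this.

```python
"""Sketch of a thin local cache in front of remote Tier-2D storage."""
import os

def open_via_cache(lfn, cache_dir, fetch_from_t2d):
    """Return a local path for logical file name `lfn`, filling the cache on a miss.

    fetch_from_t2d(lfn, destination_path) must copy the file from the remote
    T2D storage; here it is a caller-supplied placeholder.
    """
    cached = os.path.join(cache_dir, lfn.lstrip("/"))
    if not os.path.exists(cached):                    # cache miss
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        fetch_from_t2d(lfn, cached)                   # stream/copy from the T2D
    return cached

# Usage with a dummy fetch that just creates an empty placeholder file:
path = open_via_cache("/store/user/example.root", "/tmp/t2c-cache",
                      lambda lfn, dst: open(dst, "wb").close())
print("job would read:", path)
```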

  18. The commissioning of CMS sites: Improving the site reliability

    International Nuclear Information System (INIS)

    Belforte, S; Fisk, I; Flix, J; Hernandez, J M; Klem, J; Letts, J; Magini, N; Saiz, P; Sciaba, A

    2010-01-01

    The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use their network to transfer data, the functionality of all the site services relevant for CMS and the capability to sustain the various CMS computing workflows at the required scale. This contribution describes in detail the procedure to rate CMS sites depending on their performance, including the complete automation of the program, the description of monitoring tools, and its impact on improving the overall reliability of the Grid from the point of view of the CMS computing system.
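
    The record above describes rating sites from several classes of automated tests. The toy sketch below combines a few daily test results into a pass/fail flag purely to illustrate that kind of rating; the metrics, thresholds and output format are invented and are not the CMS Site Readiness definition.

```python
"""Toy site-rating sketch: combine daily test results into a pass/fail flag."""

def rate_site(job_success, transfer_quality, sam_availability,
              thresholds=(0.80, 0.80, 0.90)):
    """Each input is a fraction in [0, 1] for one day; returns (ready, details)."""
    checks = {
        "job success":      job_success      >= thresholds[0],
        "transfer quality": transfer_quality >= thresholds[1],
        "SAM availability": sam_availability >= thresholds[2],
    }
    # The site is flagged ready only if every individual check passes.
    return all(checks.values()), checks

ready, details = rate_site(job_success=0.93, transfer_quality=0.75,
                           sam_availability=0.98)
print("site ready:", ready, details)
```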

  19. Vaccine-Elicited Tier 2 HIV-1 Neutralizing Antibodies Bind to Quaternary Epitopes Involving Glycan-Deficient Patches Proximal to the CD4 Binding Site.

    Directory of Open Access Journals (Sweden)

    Ema T Crooks

    2015-05-01

    Eliciting broad tier 2 neutralizing antibodies (nAbs) is a major goal of HIV-1 vaccine research. Here we investigated the ability of native, membrane-expressed JR-FL Env trimers to elicit nAbs. Unusually potent nAb titers developed in 2 of 8 rabbits immunized with virus-like particles (VLPs) expressing trimers (trimer VLP sera) and in 1 of 20 rabbits immunized with DNA expressing native Env trimer, followed by a protein boost (DNA trimer sera). All 3 sera neutralized via quaternary epitopes and exploited natural gaps in the glycan defenses of the second conserved region of JR-FL gp120. Specifically, trimer VLP sera took advantage of the unusual absence of a glycan at residue 197 (present in 98.7% of Envs). Intriguingly, removing the N197 glycan (with no loss of tier 2 phenotype) rendered 50% or 16.7% (n = 18) of clade B tier 2 isolates sensitive to the two trimer VLP sera, showing broad neutralization via the surface masked by the N197 glycan. Neutralizing sera targeted epitopes that overlap with the CD4 binding site, consistent with the role of the N197 glycan in a putative "glycan fence" that limits access to this region. A bioinformatics analysis suggested shared features of one of the trimer VLP sera and monoclonal antibody PG9, consistent with its trimer-dependency. The neutralizing DNA trimer serum took advantage of the absence of a glycan at residue 230, also proximal to the CD4 binding site and suggesting an epitope similar to that of monoclonal antibody 8ANC195, albeit lacking tier 2 breadth. Taken together, our data show for the first time that strain-specific holes in the glycan fence can allow the development of tier 2 neutralizing antibodies to native spikes. Moreover, cross-neutralization can occur in the absence of protecting glycan. Overall, our observations provide new insights that may inform the future development of a neutralizing antibody vaccine.

  20. SiteDB: Marshalling people and resources available to CMS

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S [H.H. Wills Physics Laboratory, Bristol (United Kingdom); Bonacorsi, D [University of Bologna and INFN Bologna (Italy); Ferreira, M Dias [SPRACE (Brazil); Egeland, R [University of Minnesota, Twin Cities (United States)

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems in use daily by CMS Computing operations.

  1. SiteDB: Marshalling people and resources available to CMS

    International Nuclear Information System (INIS)

    Metson, S; Bonacorsi, D; Ferreira, M Dias; Egeland, R

    2010-01-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track the sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to use to access the information it contains, for instance enabling CRAB to use 'user friendly' names when black/white listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in the external ticketing systems in use daily by CMS Computing operations.
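
    The SiteDB records above describe associations between sites, their resources, and the people with roles at each site. The dataclass sketch below is a hypothetical illustration of that kind of association model; the field names and example values are invented and do not reflect the real SiteDB schema or API.

```python
"""Hypothetical model of site/people/role associations, SiteDB-style."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class Person:
    name: str
    roles: List[str] = field(default_factory=list)   # e.g. "site admin"

@dataclass
class Site:
    cms_name: str                                     # e.g. "T2_XX_Example" (made up)
    ce_names: List[str] = field(default_factory=list)
    pledged_slots: int = 0
    contacts: List[Person] = field(default_factory=list)

    def responsible_for(self, role: str) -> List[str]:
        """Who to contact for a given task or service at this site."""
        return [p.name for p in self.contacts if role in p.roles]

site = Site("T2_XX_Example", ce_names=["ce01.example.org"], pledged_slots=1200,
            contacts=[Person("A. Admin", ["site admin"]),
                      Person("D. Manager", ["data manager"])])
print(site.responsible_for("data manager"))
```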

  2. Unified storage systems for distributed Tier-2 centres

    International Nuclear Information System (INIS)

    Cowan, G A; Stewart, G A; Elwell, A

    2008-01-01

    The start of data taking at the Large Hadron Collider will herald a new era in data volumes and distributed processing in particle physics. Data volumes of hundreds of Terabytes will be shipped to Tier-2 centres for analysis by the LHC experiments using the Worldwide LHC Computing Grid (WLCG). In many countries Tier-2 centres are distributed between a number of institutes, e.g., the geographically spread Tier-2s of GridPP in the UK. This presents a number of challenges for experiments to utilise these centres efficaciously, as CPU and storage resources may be subdivided and exposed in smaller units than the experiment would ideally want to work with. In addition, unhelpful mismatches between storage and CPU at the individual centres may be seen, which make efficient exploitation of a Tier-2's resources difficult. One method of addressing this is to unify the storage across a distributed Tier-2, presenting the centres' aggregated storage as a single system. This greatly simplifies data management for the VO, which then can access a greater amount of data across the Tier-2. However, such an approach will lead to scenarios where analysis jobs on one site's batch system must access data hosted on another site. We investigate this situation using the Glasgow and Edinburgh clusters, which are part of the ScotGrid distributed Tier-2. In particular we look at how to mitigate the problems associated with 'distant' data access and discuss the security implications of having LAN access protocols traverse the WAN between centres.

  3. Optimization of HEP Analysis Activities Using a Tier2 Infrastructure

    International Nuclear Information System (INIS)

    Arezzini, S; Bagliesi, G; Boccali, T; Ciampa, A; Mazzoni, E; Coscetti, S; Sarkar, S; Taneja, S

    2012-01-01

    While the model for a Tier2 is well understood and implemented within the HEP community, a refined design for analysis-specific sites has not been agreed upon as clearly. We aim to describe the solutions adopted at INFN Pisa, the biggest Tier2 in the Italian HEP community. A standard Tier2 infrastructure is optimized for Grid CPU and storage access, while a more interactive-oriented use of the resources is beneficial to the final data analysis step. In this step, POSIX file storage access is easier for the average physicist, and has to be provided in a real or emulated way. Modern analysis techniques use advanced statistical tools (like RooFit and RooStats), which can make use of multi-core systems. The infrastructure has to provide or create on demand computing nodes with many cores available, on top of the existing and less elastic flat Tier2 CPU infrastructure. Finally, users do not want to have to deal with data placement policies at the various sites, and hence a transparent WAN file access, again with a POSIX layer, must be provided, making use of the soon-to-be-installed 10 Gbit/s regional lines. Even if standalone systems with such features are possible and exist, the implementation of an analysis site as a virtual layer over an existing Tier2 requires novel solutions; the ones used in Pisa are described here.

  4. Renewing library Web sites CMS at libraries

    CERN Document Server

    Vida, A

    2006-01-01

    The use of the Internet has a ten-year history in Hungary. In the beginning, users were surfing textual Web sites with the Lynx browser (1991); then a range of graphical browsers appeared: Mosaic (1993), Netscape (1994), and finally Internet Explorer (1995). More and more institutions, including libraries, decided to enter the World Wide Web with their own homepage. The past ten years have brought enormous changes and new requirements in the way that institutional homepages are designed. This article offers an overview of the development phases of Web sites, presents the new tools necessary for state-of-the-art design and gives advice on their up-to-date maintenance.

  5. dCache data storage system implementations at a Tier-2 centre

    Energy Technology Data Exchange (ETDEWEB)

    Tsigenov, Oleg; Nowack, Andreas; Kress, Thomas [III. Physikalisches Institut B, RWTH Aachen (Germany)

    2009-07-01

    The experimental high energy physics groups of the RWTH Aachen University operate one of the largest Grid Tier-2 sites in the world and offer more than 2000 modern CPU cores and about 550 TB of disk space, mainly to the CMS experiment and to a lesser extent to the Auger and IceCube collaborations. Running such a large data cluster requires a flexible storage system with high performance. We use dCache for this purpose and are integrated into the dCache support team to the benefit of the German Grid sites. Recently, a storage pre-production cluster has been built to study the setup and the behavior of novel dCache features within Chimera without interfering with the production system. This talk gives an overview of the practical experience gained with dCache on both the production and the testbed cluster and discusses future plans.

  6. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe the way the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a distributed Tier-2 composed of three sites and that its members are involved in ATLAS computing tasks with a hub of research, innovation and education.

  7. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  8. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources and the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, into the central production operation.

  9. CMS: An exceptional load for an exceptional work site

    CERN Multimedia

    2001-01-01

    Components of the CMS vacuum tank have been delivered to the detector assembly site at Cessy. The complete inner shell was delivered to CERN by special convoy while the outer shell is being assembled in situ. The convoy transporting the inner shell of the CMS vacuum tank took a week to cover the distance between Lons-le-Saunier and Point 5 at Cessy. Left: the convoy making its way down from the Col de la Faucille. With lights flashing, flanked by police outriders and with roads temporarily closed, the exceptional load that passed through the Pays de Gex on Monday 20 May was accorded the same VIP treatment as a leading state dignitary. But this time it was not the identity of the passenger but the exceptional size of the object being transported that made such arrangements necessary. A convoy of two lorries was needed to transport the load, an enormous 13-metre long, 6 metre diameter cylinder weighing 120 tonnes. It took a week to cover the 120 kilometres between Lons-le-Saunier and the assembly site for...

  10. Proposed Tier 2 Screening Criteria and Tier 3 Field Procedures for Evaluation of Vapor Intrusion (ESTCP Cost and Performance Report)

    Science.gov (United States)

    2012-08-01

    Abstract not indexed: the harvested text consists only of front-matter fragments from the report (an acronym list, a blank-page notice and part of a site table). The recoverable fragments indicate Tier 2 demonstrations at the NIKE Battery Site PR-58 in N. Kingstown, RI, and at an industrial site in southeast TX, with one Tier 2 demonstration not completed.

  11. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, for data serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  12. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, for data serving capacity to Tier-2 centres for analysis, and for the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high performance data serving. We will also present the operations experience utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  13. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests called the Computing, Software and Analysis challenge (CSA'08) - as well as CMS cosmic runs - were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed...

  14. CMS analysis operations

    International Nuclear Information System (INIS)

    Andreeva, J; Maier, G; Spiga, D; Calloni, M; Colling, D; Fanzago, F; D'Hondt, J; Maes, J; Van Mulders, P; Villella, I; Klem, J; Letts, J; Padhi, S; Sarkar, S

    2010-01-01

    During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 CMS Tier-2 centres. Supporting a globally distributed community of users on a globally distributed set of computing clusters is a task that requires reconsidering the normal methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge of the task force was to evaluate the available support tools, the user support techniques, and the direct feedback of users with the goal of improving the success rate and user experience when utilizing the distributed computing environment. The task force determined the tools needed to assess and reduce the number of non-zero exit code applications submitted through the grid interfaces and worked with the CMS experiment dashboard developers to obtain the necessary information to quickly and proactively identify issues with user jobs and data sets hosted at various sites. Results of the analysis group surveys were compiled. Reference platforms for testing and debugging problems were established in various geographic regions. The task force also assessed the resources needed to make the transition to a permanent Analysis Operations task. In this presentation the results of the task force will be discussed as well as the CMS Analysis Operations plans for the start of data taking.

  15. 76 FR 71623 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2011-11-18

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  16. 75 FR 73166 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2010-11-29

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service, Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  17. 78 FR 71039 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2013-11-27

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice. SUMMARY: Publication of the tier 2 tax rates for...). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of funding...

  18. CMS end-cap yoke at the detector's assembly site.

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    The magnetic flux generated by the superconducting coil in the CMS detector is returned via an iron yoke comprising three end-cap discs at each end (end-cap yoke) and five concentric cylinders (barrel yoke). This picture shows the first of three end-cap discs (red) seen through the outer cylinder of the vacuum tank which will house the superconducting coil.

  19. CMS tier structure and operation of the experiment-specific tasks in Germany

    International Nuclear Information System (INIS)

    Nowack, A

    2008-01-01

    In Germany, several university institutes and research centres take part in the CMS experiment. Concerning data analysis, a number of computing centres at different Tier levels, ranging from Tier 1 to Tier 3, exist at these places. The German Tier 1 centre GridKa at the research centre in Karlsruhe serves all four LHC experiments as well as four non-LHC experiments. With respect to the CMS experiment, GridKa is mainly involved in central tasks. The Tier 2 centre in Germany consists of two sites, one at the research centre DESY in Hamburg and one at RWTH Aachen University, forming a federated Tier 2 centre. Both parts cover different aspects of a Tier 2 centre. The German Tier 3 centres are located at the research centre DESY in Hamburg, at RWTH Aachen University, and at the University of Karlsruhe. Furthermore, the construction of a German user analysis facility is planned. Since the CMS community in Germany is rather small, good cooperation between the different sites is essential. This cooperation covers physics topics as well as technical and operational issues. All available communication channels such as email, phone, monthly video conferences, and regular personal meetings are used. For example, the distribution of data sets is coordinated globally within Germany. Also the CMS-specific services, such as the data transfer tool PhEDEx or the Monte Carlo production, are operated by people from different sites in order to spread the knowledge widely and increase the redundancy in terms of operators.

  20. Experience running a distributed Tier-2 in Spain for the ATLAS experiment

    International Nuclear Information System (INIS)

    March, L; Hoz, S Gonzales de la; Kaci, M; Fassi, F; Fernandez, A; Lamas, A; Salt, J; Sanchez, J; Peso, J del; Fernandez, P; Munoz, L; Pardo, J; Espinal, X; Garitaonandia, H; Mir, M L; Nadal, J; Pacheco, A; Shuskov, S

    2008-01-01

    The main role of the Tier-2s is to provide computing resources for the production of simulated physics events and for distributed data analysis. The Spanish ATLAS Tier-2 is geographically distributed among three HEP institutes: IFAE (Barcelona), IFIC (Valencia) and UAM (Madrid). Currently it has a computing power of 430 kSI2K CPU, a disk storage capacity of 87 TB and a network bandwidth, connecting the three sites and the nearest Tier-1 (PIC), of 1 Gb/s. These resources will be increased in time according to the ATLAS Computing Model, in parallel with those of all ATLAS Tier-2s. Since 2002, it has been participating in the different Data Challenge exercises. Currently, it is achieving around 1.5% of the whole ATLAS collaboration production in the framework of the Computing System Commissioning exercise. Distributed data management is also emerging as an important issue in the daily activities of the Tier-2. The distribution over three sites has proven useful due to increased service redundancy, faster solution of problems, and the sharing of computing expertise and know-how. The experience gained running the distributed Tier-2 in order to be ready at the LHC start-up will be presented.

  1. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe the way the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase of energy and luminosity for the upcoming Run-2 w.r.t. Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a distributed Tier-2 composed of three sites and that its members are involved in ATLAS computing tasks with a hub of research, innovation and education.

  2. A New Information Architecture, Web Site and Services for the CMS Experiment

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services and more than 100,000 documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy which ensured continual smooth operation of all systems at all times.

  3. 77 FR 71481 - Publication of the Tier 2 Tax Rates

    Science.gov (United States)

    2012-11-30

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service Publication of the Tier 2 Tax Rates AGENCY... tax rates for calendar year 2013 as required by section 3241(d) of the Internal Revenue Code (26 U.S.C. 3241). Tier 2 taxes on railroad employees, employers, and employee representatives are one source of...

  4. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    Energy Technology Data Exchange (ETDEWEB)

    Molina-Perez, J. [UC, San Diego; Bonacorsi, D. [Bologna U.; Gutsche, O. [Fermilab; Sciaba, A. [CERN; Flix, J. [Madrid, CIEMAT; Kreuzer, P. [CERN; Fajardo, E. [Andes U., Bogota; Boccali, T. [INFN, Pisa; Klute, M. [MIT; Gomes, D. [Rio de Janeiro State U.; Kaselis, R. [Vilnius U.; Du, R. [Beijing, Inst. High Energy Phys.; Magini, N. [CERN; Butenas, I. [Vilnius U.; Wang, W. [Beijing, Inst. High Energy Phys.

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel helps to detect and react promptly to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
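
    To make the Lemon-style "metric out of range triggers an alarm" idea above concrete, the following is a minimal, hypothetical sketch in Python; the metric names and thresholds are invented for illustration and are not the actual Lemon/SLS configuration.

```python
# Illustrative sketch only: a minimal threshold-based alarm check in the spirit
# of the sensor-driven monitoring described above. Metric names and thresholds
# are hypothetical, not the real CMS/Lemon configuration.

# Hypothetical thresholds: metric name -> (min allowed, max allowed)
THRESHOLDS = {
    "cpu_load": (0.0, 30.0),
    "disk_used_fraction": (0.0, 0.95),
    "transfer_quality": (0.80, 1.0),
}

def check_metrics(samples):
    """Return a list of alarm messages for metrics outside their allowed range."""
    alarms = []
    for name, value in samples.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alarms.append(f"ALARM: {name}={value} outside [{low}, {high}]")
    return alarms

if __name__ == "__main__":
    # Example readings from one computing node (made-up values).
    readings = {"cpu_load": 42.0, "disk_used_fraction": 0.97, "transfer_quality": 0.99}
    for message in check_metrics(readings):
        print(message)
```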

  5. Monitoring techniques and alarm procedures for CMS Services and Sites in WLCG

    International Nuclear Information System (INIS)

    Molina-Perez, J; Sciabà, A; Magini, N; Bonacorsi, D; Gutsche, O; Flix, J; Kreuzer, P; Fajardo, E; Boccali, T; Klute, M; Gomes, D; Kaselis, R; Butenas, I; Du, R; Wang, W

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel helps to detect and react promptly to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  6. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish the status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, ranging from a high-level summary for the whole cloud to detailed diagnostics for individual site services. This approach provides efficient identification of correlated site problems and simplifies administration at both the cloud and site level.
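
    As an illustration of the "retrieve status information and publish a generated HTML page" pattern described above, the following is a minimal sketch; the site names, service names and status values are invented, and the real tool queries grid information services rather than a hard-coded dictionary.

```python
# Minimal sketch of a "collect status, publish HTML summary" page, in the spirit
# of the cloud monitoring tool described above. All input data here is invented.
from html import escape

def render_cloud_summary(site_status):
    """Render a one-table HTML overview: site name -> dict of service states."""
    rows = []
    for site, services in sorted(site_status.items()):
        ok = all(state == "OK" for state in services.values())
        colour = "#c8f7c5" if ok else "#f7c5c5"
        detail = ", ".join(f"{escape(k)}: {escape(v)}" for k, v in services.items())
        rows.append(
            f'<tr style="background:{colour}"><td>{escape(site)}</td><td>{detail}</td></tr>'
        )
    return (
        "<html><body><h1>Cloud status (example)</h1>"
        "<table border='1'><tr><th>Site</th><th>Services</th></tr>"
        + "".join(rows)
        + "</table></body></html>"
    )

if __name__ == "__main__":
    status = {
        "Site-A": {"transfers": "OK", "SAM": "OK"},
        "Site-B": {"transfers": "DEGRADED", "SAM": "OK"},
    }
    with open("cloud_status.html", "w") as fh:
        fh.write(render_cloud_summary(status))
```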

  7. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    CERN Document Server

    Molina-Perez, Jorge Amando

    2012-01-01

    The CMS offline computing system is composed of roughly 80 sites (including the most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS; the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator on duty at CERN. This dedicated 24/7 computing shift personnel helps to detect and react promptly to any unexpected error and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is explo...

  8. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted.   CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat a...

  9. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natu...

  10. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the natur...

  11. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and ...

  12. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ Management- CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management - CB - MB - FB Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2007 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of empl...

  13. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the iCMS Web site. The following items can be found on: http://cms.cern.ch/iCMS/ General - CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. LHC Symposiums Management - CB - MB - FB - FMC Agendas and minutes are accessible to CMS members through their AFS account (ZH). However some linked documents are restricted to the Board Members. FB documents are only accessible to FB members. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2006 Annual reviews are posted. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral students upon completion of their theses. Therefore it is requested that Ph.D students inform the CMS Secretariat about the na...

  14. The CMS integration grid testbed

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Gregory E.

    2004-08-26

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.

  15. The CMS Integration Grid Testbed

    CERN Document Server

    Graham, G E; Aziz, Shafqat; Bauerdick, L.A.T.; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yu-jun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge Luis; Kategari, Suchindra; Couvares, Peter; DeSmet, Alan; Livny, Miron; Roy, Alain; Tannenbaum, Todd; Graham, Gregory E.; Aziz, Shafqat; Ernst, Michael; Kaiser, Joseph; Ratnikova, Natalia; Wenzel, Hans; Wu, Yujun; Aslakson, Erik; Bunn, Julian; Iqbal, Saima; Legrand, Iosif; Newman, Harvey; Singh, Suresh; Steenberg, Conrad; Branson, James; Fisk, Ian; Letts, James; Arbree, Adam; Avery, Paul; Bourilkov, Dimitri; Cavanaugh, Richard; Rodriguez, Jorge; Kategari, Suchindra; Couvares, Peter; Smet, Alan De; Livny, Miron; Roy, Alain; Tannenbaum, Todd

    2003-01-01

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent based Mona Lisa. Domain specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. ...

  16. CMS DOCUMENTATION

    CERN Multimedia

    CMS TALKS AT MAJOR MEETINGS The agenda and talks from major CMS meetings can now be electronically accessed from the ICMS Web site. The following items can be found on: http://cms.cern.ch/iCMS Management – CMS Weeks (Collaboration Meetings), CMS Weeks Agendas The talks presented at the Plenary Sessions. Management – CB – MB – FB Agendas and minutes are accessible to CMS members through Indico. LHCC The talks presented at the ‘CMS Meetings with LHCC Referees’ are available on request from the PM or MB Country Representative. Annual Reviews The talks presented at the 2008 Annual Reviews are posted in Indico. CMS DOCUMENTS It is considered useful to establish information on the first employment of CMS doctoral student upon completion of their theses.  Therefore it is requested that Ph.D students inform the CMS Secretariat about the nature of employment and name of their first employer. The Notes, Conference Reports and Theses published si...

  17. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of the different workflows during the data-taking period of 2010, their efficiencies and latencies, as well as their impact on the delivery of physics results are discussed and lessons are drawn from this experience. The simul...

  18. CMS Connect

    Science.gov (United States)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled on batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores and another 50K to 100K CPU cores from opportunistic resources for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, Condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a user-friendly way that is integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of Condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform in order to integrate the service with CMS-specific needs, including site-specific submission, accounting of jobs, and automated reporting to standard CMS monitoring resources in a way that is effortless for its users.
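
    As a hypothetical illustration of the kind of Condor-style analysis job submission that CMS Connect is meant to support, the sketch below writes a standard HTCondor submit description and hands it to condor_submit; the executable, file names and job count are placeholders, and the actual CMS Connect submission setup may differ.

```python
# Hypothetical illustration only: submit a small cluster of analysis jobs with a
# plain HTCondor submit file. The submit keywords are standard HTCondor ones; the
# executable, input list and output names are made up for this sketch.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    executable   = run_analysis.sh
    arguments    = dataset_list.txt
    output       = job.$(Cluster).$(Process).out
    error        = job.$(Cluster).$(Process).err
    log          = job.$(Cluster).log
    request_cpus = 1
    queue 10
""")

def submit_jobs(description, filename="analysis.sub"):
    """Write a submit file and hand it to condor_submit (assumed to be on PATH)."""
    with open(filename, "w") as fh:
        fh.write(description)
    # In a real setup this would run on the submission (login) node of the pool.
    return subprocess.run(["condor_submit", filename], check=True)

if __name__ == "__main__":
    submit_jobs(submit_description)
```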

  19. The CMS experiment inaugurated a new visitor centre at its Cessy site on 14 June

    CERN Multimedia

    2001-01-01

    The CMS visitor centre has been built on a platform overlooking CMS construction. It contains a set of clear descriptive posters describing the experiment, along with a video projection showing animations and movies about CMS construction.

  20. Storage element performance optimization for CMS analysis jobs

    International Nuclear Information System (INIS)

    Behrmann, G; Dahlblom, J; Guldmyr, J; Happonen, K; Lindén, T

    2012-01-01

    Tier-2 computing sites in the Worldwide LHC Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Achieving good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system scales with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I
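
    The CPU-efficiency figures quoted above follow the usual definition of CPU time divided by wall-clock time; the short sketch below shows that calculation for a set of invented job records.

```python
# Sketch of the CPU-efficiency metric quoted above: efficiency is CPU time
# divided by wall-clock time. The job records below are invented numbers.

def cpu_efficiency(cpu_seconds, wall_seconds):
    """CPU efficiency of a single job, as a fraction between 0 and 1."""
    return cpu_seconds / wall_seconds if wall_seconds > 0 else 0.0

def average_efficiency(jobs):
    """Aggregate efficiency over a set of jobs: sum(CPU) / sum(wall)."""
    total_cpu = sum(j["cpu"] for j in jobs)
    total_wall = sum(j["wall"] for j in jobs)
    return cpu_efficiency(total_cpu, total_wall)

if __name__ == "__main__":
    jobs = [
        {"cpu": 3200.0, "wall": 3600.0},   # I/O-limited job, ~89% efficient
        {"cpu": 3550.0, "wall": 3600.0},   # well-tuned job, ~99% efficient
    ]
    print(f"average CPU efficiency: {average_efficiency(jobs):.1%}")
```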

  1. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Legger, Federica; Llamas, Ramón Medrano; Sciabà, Andrea; García, Mario Úbeda; Ster, Daniel van der; Sciacca, Gianfranco

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
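
    The automatic site exclusion mentioned above can be thought of as a policy over recent functional-test results; the sketch below is a simplified, hypothetical version of such a policy (the threshold, minimum sample size and test results are invented, not the actual ATLAS settings).

```python
# Illustrative sketch of an automatic exclusion policy: a site is flagged for
# exclusion if its recent test-job success rate drops below a threshold.
# Threshold, window and results below are invented for illustration.

SUCCESS_THRESHOLD = 0.8   # hypothetical: flag sites below 80% success
MIN_TESTS = 5             # require a minimum sample before acting

def sites_to_exclude(test_results, threshold=SUCCESS_THRESHOLD, min_tests=MIN_TESTS):
    """test_results: site -> list of booleans (True = test job succeeded)."""
    excluded = []
    for site, outcomes in test_results.items():
        if len(outcomes) < min_tests:
            continue  # not enough data to make a decision
        success_rate = sum(outcomes) / len(outcomes)
        if success_rate < threshold:
            excluded.append((site, success_rate))
    return excluded

if __name__ == "__main__":
    results = {
        "Site-A": [True] * 9 + [False],                       # 90% -> stays in
        "Site-B": [True, False, False, True, False, False],   # 33% -> flagged
    }
    for site, rate in sites_to_exclude(results):
        print(f"exclude {site}: success rate {rate:.0%}")
```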

  2. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  3. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Document Server

    Van der Ster, D; Medrano Llamas, R; Legger, F; Sciaba, A; Sciacca, G; Ubeda Garcia, M

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion p...

  4. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion ...

  5. Understanding the T2 traffic in CMS during Run-1

    CERN Document Server

    Wildish, T

    2015-01-01

    In the run-up to Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN ...

  6. Deployment of the CMS software on the WLCG Grid

    International Nuclear Information System (INIS)

    Behrenhoff, W; Wissing, C; Kim, B; Blyweert, S; D'Hondt, J; Maes, J; Maes, M; Mulders, P Van; Villella, I; Vanelderen, L

    2011-01-01

    The CMS experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, the 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for a very few sites, the deployment and the removal of CMS software are managed centrally. Since the deployment team has no local accounts at the remote sites, all installations have to be performed via Grid jobs. Via a VOMS role, the job gets a high priority in the batch system and gains write privileges to the software area. Due to the lack of interactive access, the installation jobs must be very robust against possible failures, in order not to leave a broken software installation behind. The CMS software is packaged in RPMs that are installed in the software area independently of the host OS. The apt-get tool is used to resolve package dependencies. This paper reports on the recent deployment experiences and the achieved performance.
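
    The requirement that installation jobs be robust against failures, so that no broken installation is left behind, can be illustrated with a simple install-verify-rollback pattern; the sketch below is only an illustration, and the apt-get invocation, package name and software-area path are placeholders rather than the actual CMS deployment commands.

```python
# Hedged sketch of a "robust installation job": install a release, verify it,
# and clean up if anything fails. Package name and paths are placeholders.
import os
import shutil
import subprocess
import sys

SW_AREA = "/opt/cms-sw-area"                            # placeholder software area
PACKAGE = "cms+cmssw+CMSSW_X_Y_Z"                       # placeholder package name
RELEASE_DIR = os.path.join(SW_AREA, "cms", "cmssw", "CMSSW_X_Y_Z")

def run(cmd):
    """Run a command, echoing it first; raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def install_release():
    try:
        run(["apt-get", "update"])
        run(["apt-get", "-y", "install", PACKAGE])
        # Minimal post-install sanity check: the release directory must exist.
        if not os.path.isdir(RELEASE_DIR):
            raise RuntimeError(f"expected release directory missing: {RELEASE_DIR}")
    except Exception as exc:
        # Remove the partial installation rather than leaving it broken.
        shutil.rmtree(RELEASE_DIR, ignore_errors=True)
        print(f"installation failed ({exc}); partial install removed", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(install_release())
```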

  7. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC which took place in September 2008. In Germany, several tier sites are set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during data-taking of CMS and, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well established computing sites which cover all parts of the CMS computing model. Therefore, the following topics are discussed and the achieved goals and the gained knowledge are depicted: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2 as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure a proper and reliable operation 24 hours a day, especially during the time of data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  8. Understanding the T2 traffic in CMS during Run-1

    Science.gov (United States)

    Wildish, T

    2015-12-01

    In the run-up to Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN that are at the 'edge' of our network, with limited network capacity or reliability compared to, say, the Tier-0 to Tier-1 traffic which goes over the LHCOPN network. CMS is looking to exploit technologies that allow us to interact with the network fabric so that it can manage our traffic better for us; this we hope to achieve before the end of Run-2. Tier-2 to Tier-2 traffic would be the most interesting use-case for such traffic management, precisely because it is close to the users' analysis and far from the 'core' network infrastructure. As such, a better understanding of our Tier-2 to Tier-2 traffic is important. Knowing the characteristics of our data flows can help us place our data more intelligently. Knowing how widely the data moves can help us anticipate the requirements for network capacity, and inform the dynamic data placement algorithms we expect to have in place for Run-2. This paper presents an analysis of the CMS Tier-2 traffic during Run 1.
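
    A traffic analysis of this kind ultimately reduces to aggregating transfer volumes by source and destination tier; the sketch below shows that aggregation for a few invented transfer records, assuming CMS-style site names of the form T<level>_<country>_<site> (real inputs would come from the PhEDEx transfer records).

```python
# Small sketch of the aggregation behind such a traffic analysis: group transfer
# records by (source tier, destination tier) and sum the volume. Records are invented.
from collections import defaultdict

def tier_of(site_name):
    """Infer the tier level from a CMS-style site name such as 'T2_DE_DESY'."""
    return site_name.split("_", 1)[0]   # 'T0', 'T1', 'T2', ...

def traffic_matrix(transfers):
    """transfers: iterable of (source_site, dest_site, bytes_transferred)."""
    totals = defaultdict(int)
    for src, dst, nbytes in transfers:
        totals[(tier_of(src), tier_of(dst))] += nbytes
    return totals

if __name__ == "__main__":
    sample = [
        ("T2_DE_DESY", "T2_US_MIT", 3 * 10**12),
        ("T2_US_MIT", "T1_US_FNAL", 1 * 10**12),
        ("T1_US_FNAL", "T2_ES_IFCA", 2 * 10**12),
    ]
    for (src_tier, dst_tier), nbytes in sorted(traffic_matrix(sample).items()):
        print(f"{src_tier} -> {dst_tier}: {nbytes / 1e12:.1f} TB")
```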

  9. The CMS Computing Model

    International Nuclear Information System (INIS)

    Bonacorsi, D.

    2007-01-01

    The CMS experiment at the LHC has developed a baseline Computing Model addressing the needs of a computing system capable of operating in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on achieving maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community

  10. Examining the Efficacy of a Tier 2 Kindergarten Mathematics Intervention.

    Science.gov (United States)

    Clarke, Ben; Doabler, Christian T; Smolkowski, Keith; Baker, Scott K; Fien, Hank; Strand Cary, Mari

    2016-01-01

    This study examined the efficacy of a Tier 2 kindergarten mathematics intervention program, ROOTS, focused on developing whole number understanding for students at risk in mathematics. A total of 29 classrooms were randomly assigned to treatment (ROOTS) or control (standard district practices) conditions. Measures of mathematics achievement were collected at pretest and posttest. Treatment and control students did not differ on mathematics assessments at pretest. Gain scores of at-risk intervention students were significantly greater than those of control peers, and the gains of at-risk treatment students were greater than the gains of peers not at risk, effectively reducing the achievement gap. Implications for Tier 2 mathematics instruction in a response to intervention (RtI) model are discussed. © Hammill Institute on Disabilities 2014.

  11. The US-CMS Tier-1 Center Network Evolving toward 100Gbps

    International Nuclear Information System (INIS)

    Bobyshev, A; DeMar, P

    2011-01-01

    Fermilab hosts the US Tier-1 Center for the LHC's Compact Muon Solenoid (CMS) experiment. The Tier-1s are the central points for the processing and movement of LHC data. They sink raw data from the Tier-0 at CERN, process and store it locally, and then distribute the processed data to Tier-2s for simulation studies and analysis. The Fermilab Tier-1 Center is the largest of the CMS Tier-1s, accounting for roughly 35% of the experiment's Tier-1 computing and storage capacity. Providing capacious, resilient network services, both in terms of local network infrastructure and off-site data movement capabilities, presents significant challenges. This article will describe the current architecture, status, and near-term plans for network support of the US-CMS Tier-1 facility.

  12. Time horizon for AFV emission savings under Tier 2

    International Nuclear Information System (INIS)

    Saricks, C. L.

    2000-01-01

    Implementation of the Federal Tier 2 vehicular emission standards according to the schedule presented in the December 1999 Final Rule will result in substantial reductions of NMHC, CO, NOx, and fine particle emissions from motor vehicles. Currently, when compared to Tier 1 and even NLEV certification requirements, the emissions performance of automobiles and light-duty trucks powered by non-petroleum (especially gaseous) fuels (i.e., vehicles collectively termed AFVs) enjoys a measurable advantage over their gasoline- and diesel-fueled counterparts over the full Federal Test Procedure and, especially, in Bag 1 (cold start). For the lighter end of these vehicle classes, this advantage may disappear shortly after 2004 under the new standards, but should continue for a longer period (perhaps beyond 2008) for the heavier end as well as for heavy-duty vehicles relative to diesel-fueled counterparts. Because of the continuing commitment of the U.S. Department of Energy's Clean Cities coalitions to the acquisition and operation of AFVs of many types and size classes, it is important for them to know in which classes their acquisitions will remain cleaner than the petroleum-fueled counterparts they might otherwise procure. This paper provides an approximate timeline for and expected magnitude of such savings, assuming that full implementation of the Tier 2 standards covering both vehicular emissions and fuel sulfur limits proceeds on schedule. The pollutants of interest are primary ozone precursors and fine particulate matter from fuel combustion

  13. Grid Interoperation with ARC middleware for the CMS experiment

    International Nuclear Information System (INIS)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva; Field, Laurence; Qing, Di; Frey, Jaime; Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  14. Grid Interoperation with ARC middleware for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Edelmann, Erik; Groenager, Michael; Johansson, Daniel; Kleist, Josva [Nordic DataGrid Facility, Kastruplundgade 22, 1., DK-2770 Kastrup (Denmark); Field, Laurence; Qing, Di [CERN, CH-1211 Geneve 23 (Switzerland); Frey, Jaime [University of Wisconsin-Madison, 1210 W. Dayton St., Madison, WI (United States); Happonen, Kalle; Klem, Jukka; Koivumaeki, Jesper; Linden, Tomas; Pirinen, Antti, E-mail: Jukka.Klem@cern.c [Helsinki Institute of Physics, PO Box 64, FIN-00014 University of Helsinki (Finland)

    2010-04-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developing specific ARC plugins in CMS software.

  15. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making a $AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  16. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
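
    A much-simplified sketch of launching worker-node VMs on an OpenStack cloud is shown below, assuming the openstacksdk Python client; the cloud, image, flavor and network names are placeholders, and the real site scripts additionally join each VM to the dynamic Torque cluster and apply the Puppet configuration described above.

```python
# Hedged sketch (not the actual site scripts): boot a few worker VMs with
# openstacksdk. All names are placeholders; credentials come from clouds.yaml.
import openstack

def launch_workers(cloud_name, count, image, flavor, network):
    conn = openstack.connect(cloud=cloud_name)          # reads clouds.yaml
    img = conn.compute.find_image(image)
    flv = conn.compute.find_flavor(flavor)
    net = conn.network.find_network(network)
    servers = []
    for i in range(count):
        server = conn.compute.create_server(
            name=f"worker-{i:03d}",
            image_id=img.id,
            flavor_id=flv.id,
            networks=[{"uuid": net.id}],
        )
        # Block until the VM is ACTIVE before handing it to the batch system.
        servers.append(conn.compute.wait_for_server(server))
    return servers

if __name__ == "__main__":
    launch_workers("research-cloud", count=2,
                   image="sl-worker-base", flavor="m1.large", network="tier3-net")
```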

  17. Spanish ATLAS Tier-1 &Tier-2 perspective on computing over the next years

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration

    2018-01-01

    Since the beginning of the WLCG project, the Spanish ATLAS computing centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPU) over the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction as well as the number of authors are both close to 3%. In 2015 an international advisory committee recommended revising our contribution in line with our participation in the ATLAS experiment. In this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to flexibly use all the resources available to the collaboration, where the tiered structure is gradually vanishing. In this contribution, we would like to show the evolution of and technical updates in the ATLAS Spanish federated Tier-2 and Tier-1. Some developments w...

  18. RFI to CMS: An Approach to Regulatory Acceptance of Site Remediation Technologies

    Science.gov (United States)

    Rowland, Martin A.

    2001-01-01

    Lockheed Martin made a smooth transition from the RCRA Facility Investigation (RFI) at the National Aeronautics and Space Administration's (NASA) Michoud Assembly Facility (MAF) to its Corrective Measures Study (CMS) phase within the RCRA Corrective Action Process. We located trichloroethylene (TCE) contamination that resulted from the manufacture of the Apollo Program Saturn V rocket and the Space Shuttle External Tank, began the cleanup, and identified appropriate technologies for final remedies. This was accomplished by establishing a close working relationship with the state environmental regulatory agency through each step of the process, and resulted in receiving approvals for each of those steps. The agency has designated Lockheed Martin's management of the TCE contamination at the MAF site as a model for other manufacturing sites in a similar situation. In February 1984, the Louisiana Department of Environmental Quality (LDEQ) issued a compliance order to begin the clean up of groundwater contaminated with TCE. In April 1984 Lockheed Martin began operating a groundwater recovery well to capture the TCE plume. The well not only removes contaminants, but also sustains an inward groundwater hydraulic gradient so that the potential offsite migration of the TCE plume is greatly diminished. This effort was successful; for the agency to give orders and for a regulated industry to follow them is standard procedure, but it is a passive approach to solving environmental problems. The goal of the company thereafter was to take a proactive, leadership role and guide the MAF contamination cleanup to its best conclusion in minimum time and at the lowest cost to NASA. To accomplish this goal, we have established a positive working relationship with LDEQ, involving them interactively in the implementation of advanced remedial activities at MAF as outlined in the following paragraphs.

  19. Towards more stable operation of the Tokyo Tier2 center

    Science.gov (United States)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware in the third system upgrade in order to deal with the analysis of the ever-growing data of the ATLAS experiment. The number of CPU cores was increased by a factor of two (9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode. The score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes are configured with 16 CPU cores, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage through a non-blocking 10 Gbps internal network backbone using two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections to maximize the file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources shares the same architecture as the Tier2 resources, it can be migrated to extend the Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of the computing resources, we expect improved connectivity on the wide-area network. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines

  20. Towards more stable operation of the Tokyo Tier2 center

    International Nuclear Information System (INIS)

    Nakamura, T; Mashimo, T; Matsui, N; Sakamoto, H; Ueda, I

    2014-01-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware in the third system upgrade in order to deal with the analysis of the ever-growing data of the ATLAS experiment. The number of CPU cores was increased by a factor of two (9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode. The score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes are configured with 16 CPU cores, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage through a non-blocking 10 Gbps internal network backbone using two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections to maximize the file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources shares the same architecture as the Tier2 resources, it can be migrated to extend the Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of the computing resources, we expect improved connectivity on the wide-area network. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines
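
    As a quick cross-check of the figures quoted above, the aggregate compute capacity follows from the cores per node, the node count and the per-core HEPSPEC06 score; the numbers below are taken from the abstract and the total is a simple product.

```python
# Worked arithmetic using the figures quoted in the abstract above; the aggregate
# HEPSPEC06 number is just the product and is therefore approximate.
CORES_PER_NODE = 16
NODES = 624
HS06_PER_CORE = 18.03   # SL6, 32-bit compile mode, Xeon E5-2680 2.70 GHz

total_cores = CORES_PER_NODE * NODES          # 9984 cores, as quoted
total_hs06 = total_cores * HS06_PER_CORE      # aggregate compute capacity

print(f"total cores: {total_cores}")
print(f"approximate capacity: {total_hs06:,.0f} HEPSPEC06")
```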

  1. Grid Interoperation with ARC Middleware for the CMS Experiment

    CERN Document Server

    Edelmann, Erik; Frey, Jaime; Gronager, Michael; Happonen, Kalle; Johansson, Daniel; Kleist, Josva; Klem, Jukka; Koivumaki, Jesper; Linden, Tomas; Pirinen, Antti; Qing, Di

    2010-01-01

    The Compact Muon Solenoid (CMS) is one of the general purpose experiments at the CERN Large Hadron Collider (LHC). CMS computing relies on different grid infrastructures to provide computational and storage resources. The major grid middleware stacks used for CMS computing are gLite, Open Science Grid (OSG) and ARC (Advanced Resource Connector). Helsinki Institute of Physics (HIP) hosts one of the Tier-2 centers for CMS computing. CMS Tier-2 centers operate software systems for data transfers (PhEDEx), Monte Carlo production (ProdAgent) and data analysis (CRAB). In order to provide the Tier-2 services for CMS, HIP uses tools and components from both the ARC and gLite grid middleware stacks. Interoperation between grid systems is a challenging problem and HIP uses two different solutions to provide the needed services. The first solution is based on gLite-ARC grid level interoperability. This allows ARC resources to be used in CMS without modifying the CMS application software. The second solution is based on developi...

  2. Components of the CMS magnet system at the detector's assembly site.

    CERN Multimedia

    Maximilien Brice

    2002-01-01

    Photos 01, 05: Outer cylinder of the CMS vacuum tank. The vacuum tank consists of inner and outer stainless-steel cylinders and houses the superconducting coil. As can be seen, the cylinder is attached to the innermost ring of the barrel yoke. Photos 02, 04: CMS end-cap yoke. The magnetic flux generated by the superconducting coil in the CMS detector is returned via an iron yoke comprising three end-cap discs at each end (end-cap yoke) and five concentric cylinders (barrel yoke).Photo 03: Inner cylinder of the CMS vacuum tank. The vacuum tank consists of inner and outer stainless-steel cylinders and houses the superconducting coil. The inner cylinder contains all the barrel sub-detectors, which it supports via a system of horizontal rails. The cylinder is pictured here in the vertical position on a yellow platform mounted on the ferris-wheel support structure. This will allow it to be pivoted and inserted into the outer cylinder already attached to the innermost ring of the barrel yoke.

  3. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has recently been installed. The system will be of great help to the exciting physics analyses of the coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is a DELL PowerEdge 1955, which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3 GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured as RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with a redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  4. CMS Centre at CERN

    CERN Multimedia

    2007-01-01

    A new "CMS Centre" is being established on the CERN Meyrin site by the CMS collaboration. It will be a focal point for communications, where physicists will work together on data quality monitoring, detector calibration, offline analysis of physics events, and CMS computing operations. Construction of the CMS Centre begins in the historic Proton Synchrotron (PS) control room. The historic Proton Synchrotron (PS) control room, Opened by Niels Bohr in 1960, will be reused by CMS to built its control centre. TThe LHC@FNAL Centre, in operation at Fermilab in the US, will work very closely with the CMS Centre, as well as the CERN Control Centre. (Photo Fermilab)The historic Proton Synchrotron (PS) control room is about to start a new life. Opened by Niels Bohr in 1960, the room will be reused by CMS to built its control centre. When finished, it will resemble the CERN Contro...

  5. A Step-by-Step Guide to Tier 2 Behavioral Progress Monitoring

    Science.gov (United States)

    Bruhn, Allison L.; McDaniel, Sara C.; Rila, Ashley; Estrapala, Sara

    2018-01-01

    Students who are at risk for or show low-intensity behavioral problems may need targeted, Tier 2 interventions. Often, Tier 2 problem-solving teams are charged with monitoring student responsiveness to intervention. This process may be difficult for those who are not trained in data collection and analysis procedures. To aid practitioners in these…

  6. Transporting Motivational Interviewing to School Settings to Improve the Engagement and Fidelity of Tier 2 Interventions

    Science.gov (United States)

    Frey, Andy J.; Lee, Jon; Small, Jason W.; Seeley, John R.; Walker, Hill M.; Feil, Edward G.

    2013-01-01

    The majority of Tier 2 interventions are facilitated by specialized instructional support personnel, such as school psychologists, school social workers, school counselors, or behavior consultants. Many professionals struggle to involve parents and teachers in Tier 2 behavior interventions. However, attention to the motivational issues for…

  7. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. Evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  8. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During the last years the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. Evolution of the ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier2s are going to be used more effic...

  9. Developing the Capacity to Implement Tier 2 and Tier 3 Supports: How Do We Support Our Faculty and Staff in Preparing for Sustainability?

    Science.gov (United States)

    Oakes, Wendy Peia; Lane, Kathleen Lynne; Germer, Kathryn A.

    2014-01-01

    School-site and district-level leadership teams rely on the existing knowledge base to select, implement, and evaluate evidence-based practices meeting students' multiple needs within the context of multitiered systems of support. The authors focus on the stages of implementation science as applied to Tier 2 and Tier 3 supports; the…

  10. CMS Factsheet

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2016-01-01

    CMS Factsheets: containing facts about the CMS collaboration and detector. Printed copies of the English version are available from the CMS Secretariat. Responsible for translations: English only - E.Gibney (updated 2015)

  11. CMS Data Transfer operations after the first years of LHC collisions

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CMS experiment operates a distributed computing infrastructure, and its performance heavily depends on the fast and smooth distribution of data between the different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1 sites for storage and archiving, and timeliness and good transfer quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data have to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are synchronized back to Tier-1 sites for archival. At the core of the transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging of all transfer issues. Based on transfer quality information, the Site Readiness tool is used to plan future resource utilization. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all ov...
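
    The transfer-quality bookkeeping mentioned in this record can be illustrated with a small sketch. The Python example below is not PhEDEx or Site Readiness code; it is a minimal, hypothetical illustration that assumes per-link counts of successful and failed transfer attempts and flags links whose success ratio falls below an arbitrary threshold.

```python
# Minimal sketch: flag transfer links with poor quality.
# Not actual PhEDEx/Site Readiness code; link names, counts and the
# 80% threshold are illustrative assumptions.

link_stats = {
    ("T1_EXAMPLE", "T2_EXAMPLE_A"): {"ok": 950, "failed": 50},
    ("T1_EXAMPLE", "T2_EXAMPLE_B"): {"ok": 300, "failed": 700},
}

QUALITY_THRESHOLD = 0.80  # assumed cut-off for a "good" link

def link_quality(stats):
    """Fraction of successful transfer attempts on a link."""
    attempts = stats["ok"] + stats["failed"]
    return stats["ok"] / attempts if attempts else 0.0

for (src, dst), stats in link_stats.items():
    quality = link_quality(stats)
    status = "OK" if quality >= QUALITY_THRESHOLD else "DEGRADED"
    print(f"{src} -> {dst}: quality {quality:.2f} [{status}]")
```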

  12. CMS readiness for multi-core workload scheduling

    Science.gov (United States)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single-core and multi-core jobs simultaneously. This provides a solution to the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
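
    To make the partitionable-pilot idea concrete, the sketch below shows a greedy packing of a mixed queue of single-core and multi-core job requests into one fixed-size pilot. It is plain Python, not GlideinWMS or HTCondor code, and the pilot size and request list are illustrative assumptions.

```python
# Illustrative sketch of partitionable-slot matching: a pilot with a fixed
# number of cores accepts job requests of different sizes until nothing fits.
# Not GlideinWMS/HTCondor code; all sizes are assumptions.

PILOT_CORES = 8  # assumed pilot size

def pack_jobs(pilot_cores, job_requests):
    """Greedily assign per-job core requests to a single pilot."""
    free = pilot_cores
    accepted, pending = [], []
    for cores in job_requests:
        if cores <= free:
            accepted.append(cores)
            free -= cores
        else:
            pending.append(cores)
    return accepted, pending, free

# Mixed queue of multi-core (4) and single-core (1) requests.
accepted, pending, free = pack_jobs(PILOT_CORES, [4, 1, 1, 4, 1])
print("accepted:", accepted)   # [4, 1, 1, 1] -> 7 cores in use
print("pending :", pending)    # [4] does not fit in the remaining core
print("idle cores:", free)     # 1
```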

  13. CMS Readiness for Multi-Core Workload Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Balcas, J. [Caltech; Hernandez, J. [Madrid, CIEMAT; Aftab Khan, F. [NCP, Islamabad; Letts, J. [UC, San Diego; Mason, D. [Fermilab; Verguilov, V. [CLMI, Sofia

    2017-11-22

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single-core and multi-core jobs simultaneously. This provides a solution to the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  14. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  15. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    International Nuclear Information System (INIS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Skipsey, Samuel Cadellin; Britton, David; Purdie, Stuart

    2014-01-01

    With the current trend towards 'On Demand Computing' in big data environments it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large-scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource-constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid-related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on 'off the shelf' software components. As part of the research into an automation framework, the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data sampling layer, such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance as services are recognised to be in a non-functional state by autonomous systems.
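
    The monitoring and automated decision-making loop described above can be sketched as follows. The probe is a stub standing in for an SNMP or IPMI query, and the host names, threshold and remediation action are hypothetical; this is not part of any existing framework.

```python
# Minimal sketch of a monitor-decide-act loop: sample a health metric,
# decide whether a service looks non-functional, and trigger an automated
# action.  The probe is a stub; hosts, threshold and action are hypothetical.

import random
import time

def probe_load(host: str) -> float:
    """Stand-in for an SNMP/IPMI query returning a load-like metric."""
    return random.uniform(0.0, 2.0)   # stubbed value for illustration

def restart_service(host: str, service: str) -> None:
    """Placeholder for an automated remediation action."""
    print(f"[action] would restart {service} on {host}")

LOAD_LIMIT = 1.5                       # assumed threshold for "non-functional"
HOSTS = ["wn001.example", "wn002.example"]

for _ in range(3):                     # a few sampling cycles for the example
    for host in HOSTS:
        load = probe_load(host)
        print(f"{host}: load {load:.2f}")
        if load > LOAD_LIMIT:
            restart_service(host, "cvmfs")   # hypothetical remediation target
    time.sleep(0.1)
```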

  16. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    Science.gov (United States)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work we present the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post
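
    The scheduling policies listed in this record (fairshare, quality of service, age- and size-based priority) can be pictured as a weighted sum of normalized factors, in the spirit of SLURM's multifactor priority plugin. The Python sketch below is a simplified illustration of that idea, not SLURM code; the weights and factor values are arbitrary assumptions.

```python
# Simplified illustration of multifactor job priority.  Not SLURM code:
# weights and factor values are arbitrary assumptions for the example.

WEIGHTS = {            # assumed site-specific weights
    "age": 1000,
    "fairshare": 10000,
    "job_size": 500,
    "qos": 2000,
}

def job_priority(factors: dict) -> float:
    """Weighted sum of normalized (0..1) priority factors."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# Two hypothetical jobs: one from an under-served group (high fairshare
# factor), one large job that has been waiting for a long time.
job_a = {"age": 0.2, "fairshare": 0.9, "job_size": 0.1, "qos": 0.5}
job_b = {"age": 0.8, "fairshare": 0.3, "job_size": 0.7, "qos": 0.5}

for name, factors in [("job_a", job_a), ("job_b", job_b)]:
    print(name, job_priority(factors))
```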

  17. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    International Nuclear Information System (INIS)

    Donvito, Giacinto; Italiano, Alessandro; Salomoni, Davide

    2014-01-01

    In this work we present the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or, in general, on other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful to configure specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post

  18. CMS Collaboration

    International Nuclear Information System (INIS)

    Faridah Mohammad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim

    2013-01-01

    Full-text: The CMS Collaboration is an international scientific collaboration based at the European Organization for Nuclear Research (CERN), Switzerland, dedicated to research in experimental particle physics. Consisting of 179 institutions from 41 countries around the world, the CMS Collaboration operates a general-purpose detector, the Compact Muon Solenoid (CMS), with which its members conduct experiments on the collisions of two proton beams accelerated to 8 TeV in the LHC ring. In this paper, we describe how the CMS detector is used by the scientists of the CMS Collaboration to reconstruct the most basic building blocks of matter. (author)

  19. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    International Nuclear Information System (INIS)

    González de la Hoz, S

    2012-01-01

    Originally the ATLAS Computing and Data Distribution model assumed that the Tier-2s should keep on disk collectively at least one copy of all “active” AOD and DPD datasets. Evolution of the ATLAS Computing and Data model requires changes in the ATLAS Tier-2 policy for data replication, dynamic data caching and remote data access. Tier-2 operations take place completely asynchronously with respect to data taking. Tier-2s do simulation and user analysis. Large-scale reprocessing jobs on real data at first take place mostly at Tier-1s but will progressively be shared with Tier-2s as well. The availability of disk space at Tier-2s is extremely important in the ATLAS Computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier-2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier-2s are going to be used more efficiently. In this way Tier-1s and Tier-2s are becoming more equivalent for the network, and the Tier-1/Tier-2 hierarchy is less strict. This paper presents the usage of Tier-2 resources in different Grid activities, the caching of data at Tier-2s, and their role in the analysis in the new ATLAS Computing and Data model.

  20. A Tier2 Center at the University of Florida

    Science.gov (United States)

    Rodriguez, Jorge Luis

    2005-11-01

    The High Energy Physics (HEP) Group at the University of Florida is involved in a variety of projects ranging from HEP experiments at hadron and electron-positron colliders to cutting-edge computer science experiments focused on grid computing. In support of these activities the Florida group has developed and deployed a computational facility consisting of several service nodes, compute clusters and disk storage devices. The resources contribute collectively or individually to production and development activities for the CMS experiment at the Large Hadron Collider (LHC), Monte Carlo production for the CDF experiment at Fermilab, the CLEO experiment, and research on grid computing for the GriPhyN, iVDGL and UltraLight projects. The collection of servers, clusters and storage devices is managed as a single facility using the ROCKS cluster management system. Operating the facility as a single centrally managed system enhances our ability to relocate and reconfigure the resources as necessary in support of our research and production activities. In this paper we describe the architecture, including details on our local implementation of the ROCKS system and how this simplifies the maintenance and administration of the facility.

  1. Cleavage-Independent HIV-1 Trimers From CHO Cell Lines Elicit Robust Autologous Tier 2 Neutralizing Antibodies

    Directory of Open Access Journals (Sweden)

    Shridhar Bale

    2018-05-01

    Native flexibly linked (NFL) HIV-1 envelope glycoprotein (Env) trimers are cleavage-independent and display a native-like, well-folded conformation that preferentially displays broadly neutralizing determinants. The NFL platform simplifies large-scale production of Env by eliminating the need to co-transfect the precursor-cleaving protease, furin, that is required by the cleavage-dependent SOSIP trimers. Here, we report the development of a CHO-M cell line that expressed BG505 NFL trimers at a high level of homogeneity and yields of ~1.8 g/l. BG505 NFL trimers purified by single-step lectin-affinity chromatography displayed a native-like closed structure, efficient recognition by trimer-preferring bNAbs, no recognition by non-neutralizing CD4 binding site-directed and V3-directed antibodies, long-term stability, and proper N-glycan processing. Following negative selection, formulation in ISCOMATRIX adjuvant and inoculation into rabbits, the trimers rapidly elicited potent autologous tier 2 neutralizing antibodies. These antibodies targeted the N-glycan “hole” naturally present on the BG505 Env proximal to residues at positions 230, 241, and 289. The BG505 NFL trimers that did not expose V3 in vitro elicited low-to-no tier 1 virus neutralization in vivo, indicating that they remained intact during the immunization process, not exposing V3. In addition, BG505 NFL and BG505 SOSIP trimers expressed from 293F cells, when formulated in Adjuplex adjuvant, elicited equivalent BG505 tier 2 autologous neutralizing titers. These titers were lower in potency when compared to the titers elicited by CHO-M cell derived trimers. In addition, increased neutralization of tier 1 viruses was detected. Taken together, these data indicate that both the adjuvant and the expression cell type can affect the elicitation of tier 2 and tier 1 neutralizing responses in vivo.

  2. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  3. CMS 2006 - CMS France days

    International Nuclear Information System (INIS)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J.

    2006-01-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronic needs. This document gathers the slides of the presentations.

  4. Scaling up a CMS tier-3 site with campus resources and a 100 Gb/s network connection: what could go wrong?

    Science.gov (United States)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Tovar, Benjamin; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    The University of Notre Dame (ND) CMS group operates a modest-sized Tier-3 site suitable for local, final-stage analysis of CMS data. However, through the ND Center for Research Computing (CRC), Notre Dame researchers have opportunistic access to roughly 25k CPU cores of computing and a 100 Gb/s WAN network link. To understand the limits of what might be possible in this scenario, we undertook to use these resources for a wide range of CMS computing tasks from user analysis through large-scale Monte Carlo production (including both detector simulation and data reconstruction.) We will discuss the challenges inherent in effectively utilizing CRC resources for these tasks and the solutions deployed to overcome them.

  5. Implementing data placement strategies for the CMS experiment based on a popularity model

    International Nuclear Information System (INIS)

    Barreiro Megino, F H; Cinquilli, M; Giordano, D; Karavakis, E; Girone, M; Magini, N; Mancinelli, V; Spiga, D

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonstrate dynamic data placement functionality based on this popularity service and integrate it into the data and workload management systems: as a consequence the pre-placement of data will be minimized and additional replication of hot datasets will be requested automatically. This paper will give an insight into the development, validation and production process and will analyze how the framework has influenced resource optimization and daily operations in CMS.
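
    To make the idea of a popularity-based cleaning agent concrete, the sketch below is a minimal, hypothetical Python example, not the CMS Popularity Service or PhEDEx code: given per-dataset replica sizes and last-access times at a site that is over its quota, it proposes the least recently accessed, long-unused replicas for deletion until usage falls back below a target.

```python
# Minimal sketch of a popularity-based cleanup proposal.  Not CMS Popularity
# Service / PhEDEx code; dataset names, sizes, quota and the 90-day "unused"
# cut are illustrative assumptions.

from datetime import datetime, timedelta

NOW = datetime(2012, 6, 1)
UNUSED_AFTER = timedelta(days=90)        # assumed definition of "obsolete"

# (dataset, size in TB, last access time) at a hypothetical Tier-2
replicas = [
    ("/Dataset/A/AOD", 12.0, NOW - timedelta(days=200)),
    ("/Dataset/B/AOD", 30.0, NOW - timedelta(days=5)),
    ("/Dataset/C/AOD", 25.0, NOW - timedelta(days=120)),
]

SITE_QUOTA_TB = 60.0
TARGET_USAGE_TB = 0.95 * SITE_QUOTA_TB   # clean down to 95% of quota

def cleanup_proposal(replicas, used_tb, target_tb):
    """Propose least recently used, 'obsolete' replicas for deletion."""
    candidates = sorted(
        (r for r in replicas if NOW - r[2] > UNUSED_AFTER),
        key=lambda r: r[2],              # oldest access first
    )
    proposal = []
    for name, size, _ in candidates:
        if used_tb <= target_tb:
            break
        proposal.append(name)
        used_tb -= size
    return proposal, used_tb

used = sum(size for _, size, _ in replicas)                # 67 TB, over quota
proposal, projected = cleanup_proposal(replicas, used, TARGET_USAGE_TB)
print("delete:", proposal)                 # ['/Dataset/A/AOD']
print("projected usage (TB):", projected)  # 55.0, back under the 57 TB target
```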

  6. CMS Space Monitoring

    Science.gov (United States)

    Ratnikova, N.; Huang, C.-H.; Sanchez-Hernandez, A.; Wildish, T.; Zhang, X.

    2014-06-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.
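
    The storage-dump aggregation step described in this record can be illustrated with a toy example. The sketch below is plain Python, not the actual CMS space-monitoring code; the "path size" dump format and the aggregation depth are assumptions, and the per-directory totals it prints are the kind of record one might upload to a central database.

```python
# Toy sketch: aggregate a site storage dump (lines of "<path> <bytes>") into
# per-directory totals.  Not the actual CMS space-monitoring code; the dump
# format and the aggregation depth are assumptions.

from collections import defaultdict
from io import StringIO

dump = StringIO(
    "/store/data/Run2012A/file1.root 2147483648\n"
    "/store/data/Run2012A/file2.root 1073741824\n"
    "/store/mc/Summer12/file3.root 536870912\n"
)

def aggregate(dump_file, depth=3):
    """Sum bytes per directory prefix of the given depth."""
    totals = defaultdict(int)
    for line in dump_file:
        path, size = line.rsplit(maxsplit=1)
        prefix = "/".join(path.split("/")[:depth + 1])
        totals[prefix] += int(size)
    return totals

for directory, total_bytes in sorted(aggregate(dump).items()):
    print(f"{directory}: {total_bytes / 1e9:.2f} GB")
```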

  7. CMS Space Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ratnikova, N. [Fermilab; Huang, C.-H. [Fermilab; Sanchez-Hernandez, A. [CINVESTAV, IPN; Wildish, T. [Princeton U.; Zhang, X. [Beijing, Inst. High Energy Phys.

    2014-01-01

    During the first LHC run, CMS stored about one hundred petabytes of data. Storage accounting and monitoring help to meet the challenges of storage management, such as efficient space utilization, fair share between users and groups and resource planning. We present a newly developed CMS space monitoring system based on the storage metadata dumps produced at the sites. The information extracted from the storage dumps is aggregated and uploaded to a central database. A web based data service is provided to retrieve the information for a given time interval and a range of sites, so it can be further aggregated and presented in the desired format. The system has been designed based on the analysis of CMS monitoring requirements and experiences of the other LHC experiments. In this paper, we demonstrate how the existing software components of the CMS data placement system, PhEDEx, have been re-used, dramatically reducing the development effort.

  8. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    International Nuclear Information System (INIS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-01-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.
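
    The dual-stack DNS check described in this record can be sketched in a few lines of Python. This is not the plugin developed at the site, just an illustrative check built on the standard socket module and Nagios-style exit codes (0 = OK, 2 = CRITICAL); the hostname is a placeholder.

```python
#!/usr/bin/env python3
# Illustrative Nagios-style check: is a name resolvable over IPv6-only and
# over IPv4-only?  Not the plugin described above; the hostname below is a
# placeholder.

import socket
import sys

def resolvable(hostname: str, family: int) -> bool:
    """True if hostname resolves to at least one address of this family."""
    try:
        return bool(socket.getaddrinfo(hostname, None, family))
    except socket.gaierror:
        return False

def main(hostname: str) -> int:
    ok6 = resolvable(hostname, socket.AF_INET6)
    ok4 = resolvable(hostname, socket.AF_INET)
    if ok6 and ok4:
        print(f"OK - {hostname} resolves over both IPv6 and IPv4")
        return 0          # Nagios OK
    print(f"CRITICAL - {hostname}: IPv6={ok6} IPv4={ok4}")
    return 2              # Nagios CRITICAL

if __name__ == "__main__":
    sys.exit(main("www.example.org"))
```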

  9. Enabling IPv6 at FZU - WLCG Tier2 in Prague

    Science.gov (United States)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek

    2014-06-01

    The usage of the new IPv6 protocol in production is becoming a reality in the HEP community, and the Computing Centre of the Institute of Physics in Prague participates in many IPv6-related activities. Our contribution presents experience with monitoring in the HEPiX distributed IPv6 testbed, which includes 11 remote sites. We use Nagios to check the availability of services and Smokeping for monitoring the network latency. Since it is not always trivial to set up DNS properly in a dual-stack environment, we developed a Nagios plugin for checking whether a domain name is resolvable when using only IP protocol version 6 and only version 4. We will also present local area network monitoring and tuning related to IPv6 performance. One of the most important pieces of software for a grid site is the batch system for job execution. We will present our experience with configuring and running the Torque batch system in a dual-stack environment. We also discuss the steps needed to run VO-specific jobs in our IPv6 testbed.

  10. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    Energy Technology Data Exchange (ETDEWEB)

    Balcas, J. [Caltech; Bockelman, B. [Nebraska U.; Hufnagel, D. [Fermilab; Hurtado Anampa, K. [Notre Dame U.; Aftab Khan, F. [NCP, Islamabad; Larson, K. [Fermilab; Letts, J. [UC, San Diego; Marra da Silva, J. [Sao Paulo, IFT; Mascheroni, M. [Fermilab; Mason, D. [Fermilab; Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Tiradani, A. [Fermilab

    2017-11-22

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  11. IPv6 testing and deployment at Prague Tier 2

    Science.gov (United States)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš

    2012-12-01

    The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger) and currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that provides less routing and more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or the telnet/ssh protocol provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced and the configuration decisions that we have made during IPv6 testing. We also present a testbed built on virtual machines that was used for all the testing and evaluation.

  12. IPv6 testing and deployment at Prague Tier 2

    International Nuclear Information System (INIS)

    Kouba, Tomáš; Chudoba, Jiří; Eliáš, Marek; Fiala, Lukáš

    2012-01-01

    The Computing Center of the Institute of Physics in Prague provides computing and storage resources for various HEP experiments (D0, ATLAS, ALICE, Auger) and currently operates more than 300 worker nodes with more than 2500 cores and provides more than 2 PB of disk space. Our site is limited to one class C-sized block of IPv4 addresses, and hence we had to move most of our worker nodes behind NAT. However, this solution demands a more complicated routing setup. We see IPv6 deployment as a solution that provides less routing and more switching, and therefore promises higher network throughput. The administrators of the Computing Center strive to configure and install all provided services automatically. For installation tasks we use PXE and kickstart, for network configuration we use DHCP, and for software configuration we use CFEngine. Many hardware boxes are configured via specific web pages or the telnet/ssh protocol provided by the box itself. All our services are monitored with several tools, e.g. Nagios, Munin and Ganglia. We rely heavily on the SNMP protocol for hardware health monitoring. All these installation, configuration and monitoring tools must be tested before we can switch completely to the IPv6 network stack. In this contribution we present the tests we have made, the limitations we have faced and the configuration decisions that we have made during IPv6 testing. We also present a testbed built on virtual machines that was used for all the testing and evaluation.

  13. Explicit Instructional Interactions: Exploring the Black Box of a Tier 2 Mathematics Intervention

    Science.gov (United States)

    Doabler, Christian T.; Clarke, Ben; Stoolmiller, Mike; Kosty, Derek B.; Fien, Hank; Smolkowski, Keith; Baker, Scott K.

    2017-01-01

    A critical aspect of intervention research is investigating the active ingredients that underlie intensive interventions and their theories of change. This study explored the rate of instructional interactions within treatment groups to determine whether they offered explanatory power of an empirically validated Tier 2 kindergarten mathematics…

  14. Testing the Efficacy of a Tier 2 Mathematics Intervention: A Conceptual Replication Study

    Science.gov (United States)

    Doabler, Christian T.; Clarke, Ben; Kosty, Derek B.; Kurtz-Nelson, Evangeline; Fien, Hank; Smolkowski, Keith; Baker, Scott K.

    2016-01-01

    The purpose of this closely aligned conceptual replication study was to investigate the efficacy of a Tier 2 kindergarten mathematics intervention. The replication study differed from the initial randomized controlled trial on three important elements: geographical region, timing of the intervention, and instructional context of the…

  15. RE-AIM Checklist for Integrating and Sustaining Tier 2 Social-Behavioral Interventions

    Science.gov (United States)

    Cheney, Douglas A.; Yong, Minglee

    2014-01-01

    Even though evidence-based Tier 2 programs are now more commonly available, integrating and sustaining these interventions in schools remain challenging. RE-AIM, which stands for Reach, Effectiveness, Adoption, Implementation, and Maintenance, is a public health framework used to maximize the effectiveness of health promotion programs in…

  16. 40 CFR 141.203 - Tier 2 Public Notice-Form, manner, and frequency of notice.

    Science.gov (United States)

    2010-07-01

    40 CFR 141.203, Protection of Environment (Environmental Protection Agency): Tier 2 Public Notice - form, manner, and frequency of notice. ... house renters, apartment dwellers, university students, nursing home patients, prison inmates, etc...

  17. User and group storage management at the CMS CERN T2 centre

    Science.gov (United States)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  18. User and group storage management at the CMS CERN T2 centre

    CERN Document Server

    Cerminara, G; Pfeiffer, A

    2015-01-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  19. Optimization of Italian CMS computing centers via MIUR funded research projects

    International Nuclear Information System (INIS)

    Boccali, T; Mazzoni, E; Donvito, G; Pompili, A; Ricca, G Della; Talamo, I; Argiro, S; Grandi, C; Bonacorsi, D; Lista, L; Fabozzi, F; Barone, L M; Santocchia, A; Riahi, H; Tricomi, A; Sgaravatto, M; Maron, G

    2014-01-01

    In 2012, 14 Italian institutions participating in the LHC experiments (10 in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize analysis activities and, more generally, the Tier2/Tier3 infrastructure. A large range of activities is actively carried out: they cover data distribution over the WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on porting the CMS software stack to new highly performing / low power architectures.

  20. The Evolving role of Tier2s in ATLAS with the new Computing and Data Distribution Model

    CERN Document Server

    Gonzalez de la Hoz, S; The ATLAS collaboration

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of the ATLAS computing and data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data at first take place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier2s are going to be used mo...

  1. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    CERN Document Server

    Gonzalez de la Hoz, S

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of the ATLAS computing and data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data at first take place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data of simulation jobs. Tier2s are going to be used mo...

  2. Hiding the Complexity: Building a Distributed ATLAS Tier-2 with a Single Resource Interface using ARC Middleware

    International Nuclear Information System (INIS)

    Purdie, S; Stewart, G; Skipsey, S; Washbrook, A; Bhimji, W; Filipcic, A; Kenyon, M

    2011-01-01

    Since their inception, Grids for high energy physics have found management of data to be the most challenging aspect of operations. This problem has generally been tackled by the experiment's data management framework controlling in fine detail the distribution of data around the grid and the careful brokering of jobs to sites with co-located data. This approach, however, presents experiments with a difficult and complex system to manage as well as introducing a rigidity into the framework which is very far from the original conception of the grid. In this paper we describe how the ScotGrid distributed Tier-2, which has sites in Glasgow, Edinburgh and Durham, was presented to ATLAS as a single, unified resource using the ARC middleware stack. In this model the ScotGrid 'data store' is hosted at Glasgow and presented as a single ATLAS storage resource. As jobs are taken from the ATLAS PanDA framework, they are dispatched to the computing cluster with the fastest response time. An ARC compute element at each site then asynchronously stages the data from the data store into a local cache hosted at each site. The job is then launched in the batch system and accesses data locally. We discuss the merits of this system compared to other operational models, from the point of view of both the resource providers (sites) and the resource consumers (experiments), and consider the issues involved in transitioning to this model.

  3. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computing and personnel resources.

  4. Distributed Analysis in CMS

    CERN Document Server

    Fanfani, Alessandra; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giusepppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; Brew, Chris; Calloni, Marco; Cesini, Daniele; Cinquilli, Mattia; Codispoti, Giuseppe; D'Hondt, Jorgen; Dong, Liang; Dongiovanni, Danilo; Donvito, Giacinto; Dykstra, David; Edelmann, Erik; Egeland, Ricky; Elmer, Peter; Eulisse, Giulio; Evans, Dave; Fanzago, Federica; Farina, Fabio; Feichtinger, Derek; Fisk, Ian; Flix, Josep; Grandi, Claudio; Guo, Yuyi; Happonen, Kalle; Hernandez, Jose M; Huang, Chih-Hao; Kang, Kejing; Karavakis, Edward; Kasemann, Matthias; Kavka, Carlos; Khan, Akram; Kim, Bockjoo; Klem, Jukka; Koivumaki, Jesper; Kress, Thomas; Kreuzer, Peter; Kurca, Tibor; Kuznetsov, Valentin; Lacaprara, Stefano; Lassila-Perini, Kati; Letts, James; Linden, Tomas; Lueking, Lee; Maes, Joris; Magini, Nicolo; Maier, Gerhild; McBride, Patricia; Metson, Simon; Miccio, Vincenzo; Padhi, Sanjay; Pi, Haifeng; Riahi, Hassen; Riley, Daniel; Rossman, Paul; Saiz, Pablo; Sartirana, Andrea; Sciaba, Andrea; Sekhri, Vijay; Spiga, Daniele; Tuura, Lassi; Vaandering, Eric; Vanelderen, Lukas; Van Mulders, Petra; Vedaee, Aresh; Villella, Ilaria; Wicklund, Eric; Wildish, Tony; Wissing, Christoph; Wurthwein, Frank

    2009-01-01

    The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in their current analysis activities.

  5. CMS AWARDS

    CERN Multimedia

    Steven Lowette

    Working under great time pressure towards a common goal in gradual steps can sometimes cause us to forget to take a step back, and celebrate what marvels have been achieved. A general need was felt within CMS to expand the recognition for our young scientists that made outstanding, well recognized and creative contributions to CMS, which served to significantly advance the performance of CMS as a complete and powerful experiment. Therefore, the Collaboration Board endorsed in March 2009 a proposal from the CB Chair and Advisory Group to award each year the newly created "CMS Achievement Award" to fourteen graduate students and postdocs that made exceptional contributions to the Tracker, ECAL, HCAL and Muon subdetectors as well as the TriDAS project, the Commissioning of CMS and the Offline Software and Computing projects. It was also agreed that there was a need to go back in time, and retroactively attribute awards for the years 2007 and 2008 when CMS went from a bare cavern to a detect...

  6. The Atlantic Coast of Maryland, Sediment Budget Update: Tier 2, Assateague Island and Ocean City Inlet

    Science.gov (United States)

    2016-06-01

    ... 111 – Rivers and Harbors Act), the navigational structures at the Ocean City Inlet, and a number of Federally authorized channels (Figure 1) ... "The Atlantic Coast of Maryland, Sediment Budget Update: Tier 2, Assateague Island and Ocean City Inlet", by Ernest R. Smith, Joseph C. Reed, and Ian L. Delwiche. PURPOSE: This Coastal and Hydraulics ... of the Atlantic Ocean shoreline within the U.S. Army Corps of Engineers (USACE) Baltimore District's Area of Responsibility, which for coastal...

  7. Implementing data placement strategies for the CMS experiment based on a popularity mode

    CERN Multimedia

    CERN. Geneva; Barreiro Megino, Fernando Harald

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demons...

  8. Implementing data placement strategies for the CMS experiment based on a popularity model

    CERN Document Server

    Giordano, Domenico

    2012-01-01

    During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate the data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service that tracks file accesses and user activity on the grid and will serve as the foundation for the evolution of their data placement. A fully automated, popularity-based site-cleaning agent has been deployed in order to scan Tier-2 sites that are reaching their space quota and suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will be to demonst...

  9. Transport of the first half of the CMS hadronic forward calorimeter (HF) from building 186 (CERN Meyrin site) to the CMS construction hall at point 5, Cessy, France.

    CERN Multimedia

    Florelle Antoine

    2006-01-01

    The two halves of the Forward Hadronic Calorimeter (HF) were transported from the CERN Meyrin site to the surface assembly hall at LHC Point 5 in Cessy, France, during the first part of July. Transporting these 300 tonne objects involved the construction around them of a 65-metre long trailer, simultaneously pushed and pulled by two trucks at either end. The main road between St. Genis and Cessy was closed during these operations and a police escort was provided for the ~5 hour journeys. The two HF halves will be the first major elements to be lowered by the gantry crane into the underground experimental cavern around the end of July or beginning of August.

  10. Opportunistic resource usage in CMS

    International Nuclear Information System (INIS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D; Gutsche, O; Tadel, M; Sfiligoi, I; Letts, J; Wuerthwein, F; McCrea, A; Bockelman, B; Fajardo, E; Linares, L; Wagner, R; Konstantinov, P; Blumenfeld, B; Bradley, D

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and are preparing them especially for CMS to run the experiment's applications. But there are more resources available opportunistically both on the GRID and in local university and research clusters which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID, through EC2 compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot to mount the software distribution via CVMFS and xrootd for access to data and simulation samples via the WAN are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  11. CMS Status

    International Nuclear Information System (INIS)

    Dobrzynski, L.

    2007-01-01

    The status of the construction and installation of the CMS detector is reviewed. The 4 T magnet has been cold since the end of February 2006. Its commissioning up to the nominal field started in July 2006, allowing a Cosmic Challenge in which elements of the final detector are involved. All big mechanical pieces equipped with muon chambers have been assembled in the surface hall SX5. Since mid-July the detector has been closed, with the commissioned HCAL, two ECAL supermodules and representative elements of the silicon tracker. The trigger system as well as the DAQ are being tested. After the completion of the physics TDR, CMS is now ready for the promising signal hunting. (author)

  12. CMS overview

    CERN Document Server

    AUTHOR|(CDS)2071615

    2016-01-01

    The most recent CMS data related to high-density QCD are presented for pp and PbPb collisions at 2.76 TeV and pPb collisions at 5.02 TeV. PbPb collisions are essential to understand collective behavior and final-state effects in the detailed characteristics of hot, dense partonic matter, whereas pPb collisions provide critical information on initial-state effects, including the modification of the parton distribution function in cold nuclei. This paper highlights some of the recent heavy-ion related results from CMS.

  13. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services is given, with emphasis on CMS data management and workload management. (authors)

  14. CMS Awards

    CERN Multimedia

    2004-01-01

    Ali Mohammad Rafiee receives the CMS Gold Award from Michel Della Negra of CMS. As part of the fifth annual CMS Awards, Iranian contractor HEPCO, located in Arak, an industrial town 200 km west of Tehran, received its Gold Award in a ceremony held on 14 June 2004 (the other award winners were reported in bulletin 13/2004). The Awards are given each year to a small number of the approximately one thousand contractors working on the CMS project. Gold Awards are given for outstanding technical achievement in work carried out for the detector. HEPCO received the Award for the excellent quality of its work in constructing two 25 tonne support tables, two 75 tonne shields (FCS) and eight supporting brackets to lower the HF into the cavern. The welds and machining achieved tolerances that are very difficult to obtain in structures of that size. Mr. A. M. Rafiee, the General Manager of the company, acknowledged the benefits of this collaboration, and thanked the efforts and skills of the many staff involved.

  15. CMS Detector Posters

    CERN Multimedia

    2016-01-01

    CMS Detector posters (produced in 2000): CMS installation CMS collaboration From the Big Bang to Stars LHC Magnetic Field Magnet System Tracking System Tracker Electronics Calorimetry Electromagnetic Calorimeter Hadronic Calorimeter Muon System Muon Detectors Trigger and data acquisition (DAQ) ECAL posters (produced in 2010, FR & EN): CMS ECAL CMS ECAL-Supermodule cooling and mechatronics CMS ECAL-Supermodule assembly

  16. Protocol for Tier 2 Evaluation of Vapor Intrusion at Corrective Action Sites

    Science.gov (United States)

    2012-07-01

    [Extraction artifact: only fragments of the report survive. They reference a figure contrasting vapor intrusion with the building under controlled negative pressure (vapor intrusion "on") and controlled positive pressure (vapor intrusion "off"), and Table 3.1, "Performance Objectives" (performance objective, data requirements, success criteria, results), with quantitative objectives for Precision, Accuracy, Completeness, Representativeness, and Comparability defined in Appendix D of the demonstration plan.]

  17. Distributed Analysis Experience using Ganga on an ATLAS Tier2 infrastructure

    International Nuclear Information System (INIS)

    Fassi, F.; Cabrera, S.; Vives, R.; Fernandez, A.; Gonzalez de la Hoz, S.; Sanchez, J.; March, L.; Salt, J.; Kaci, M.; Lamas, A.; Amoros, G.

    2007-01-01

    The ATLAS detector will explore the high-energy frontier of particle physics, collecting the proton-proton collisions delivered by the LHC (Large Hadron Collider). Starting in spring 2008, the LHC will produce more than 10 petabytes of data per year. The tiered hierarchy adopted for the LHC computing model is: Tier-0 (CERN), and Tier-1 and Tier-2 centres distributed around the world. The ATLAS Distributed Analysis (DA) system has the goal of enabling physicists to perform Grid-based analysis on distributed data using distributed computing resources. The IFIC Tier-2 facility is participating in several aspects of DA. In support of the ATLAS DA activities a prototype is being tested, deployed and integrated. The analysis data processing applications are based on the Athena framework. GANGA, developed by the LHCb and ATLAS experiments, allows simple switching between testing on a local batch system and large-scale processing on the Grid, hiding Grid complexities. GANGA provides physicists with an integrated environment for job preparation, bookkeeping and archiving, and job splitting and merging. The experience with the deployment, configuration and operation of the DA prototype will be presented. Experience gained using the DA system and GANGA in Top physics analysis will be described. (Author)
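
    To give a flavour of the job-definition style mentioned above, here is a minimal sketch of a generic Ganga job, assuming it is typed into the Ganga interactive shell (where names such as Job, Executable and Local are provided). The executable, arguments and backend choice are illustrative only and do not reproduce the Athena/Grid configuration used in the paper.

```python
# Minimal, illustrative Ganga job, intended for the Ganga interactive shell.
# The application and backend below are placeholders, not the paper's Athena/Grid setup.
j = Job(name='hello-ganga')
j.application = Executable(exe='/bin/echo', args=['Hello from Ganga'])
j.backend = Local()          # swap for a Grid backend to run remotely
j.submit()

# Later, inspect the job status and output:
print(j.status)              # e.g. 'submitted', 'running', 'completed'
```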

  18. Monitoring and optimization of ATLAS Tier 2 center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Quadt, Arnulf; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of the distributed computing infrastructure are single computing centres, such as GoeGrid, a Tier-2 centre of the Worldwide LHC Computing Grid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal has been achieved by means of the analysis of the GoeGrid monitoring information. The data analysis approach was based on the adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since availability, reliability and serviceability of the computing platform can be measured according to the const...
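
    As an illustration of the kind of classification mentioned above, the sketch below trains a linear SVM on synthetic monitoring metrics to flag degraded service states. The feature names, labelling rule and data are invented for the example and have no connection to the actual GoeGrid monitoring streams.

```python
# Illustrative linear-SVM classification of synthetic monitoring samples (not GoeGrid data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(seed=42)
n = 400
# Synthetic features: [CPU load average, fraction of failed jobs, storage latency in ms]
X = np.column_stack([
    rng.uniform(0.0, 1.5, n),
    rng.uniform(0.0, 0.3, n),
    rng.uniform(1.0, 50.0, n),
])
# Invented labelling rule: a sample is "degraded" (1) if failures and latency are both high.
y = ((X[:, 1] > 0.15) & (X[:, 2] > 25.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LinearSVC(C=1.0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```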

  19. Illustrative Example of Distributed Analysis in ATLAS Spanish Tier-2 and Tier-3 centers

    CERN Document Server

    Oliver, E; The ATLAS collaboration; González de la Hoz, S; Kaci, M; Lamas, A; Salt, J; Sánchez, J; Villaplana, M

    2011-01-01

    Data taking in ATLAS has been going on for more than one year. The necessity of a computing infrastructure for data storage, access by thousands of users and the processing of hundreds of millions of events has been confirmed in this period. Fortunately, this task has been managed by the GRID infrastructure and by the manpower that has also been developing specific GRID tools for the ATLAS community. An example of a physics analysis using this infrastructure, a search for the decay of a heavy resonance into a ttbar pair, is shown, concretely using the ATLAS Spanish Tier-2 and the IFIC Tier-3. At this moment, the ATLAS Distributed Computing group is working to improve the connectivity among centers in order to be ready for the foreseen increase in ATLAS activity in the next years.

  20. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the utilization of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on the performance of the user workload. The proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through the adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  1. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by the utilization of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them allows one to develop guidelines for selecting how to integrate the storage resources. A solution for preserving a specific software stack for the experiment in a shared environment is presented, along with its effects on the performance of the user workload. The proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment, through the adoption of cloud computing technology and the 'Infrastructure as Code' concept, completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.
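
    The I/O benchmarking mentioned in the thesis abstract can be illustrated with a very small sequential-read measurement. The file size, block size and use of a temporary file below are arbitrary choices for the example; a real study would of course exercise the actual analysis workload against the actual storage stack.

```python
# Tiny sequential-read throughput measurement (illustrative only; not the thesis benchmark suite).
import os
import tempfile
import time

BLOCK = 4 * 1024 * 1024          # 4 MiB read size
SIZE = 64 * 1024 * 1024          # 64 MiB test file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))    # create a test file of pseudo-random data
    path = f.name

try:
    start = time.perf_counter()
    read = 0
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            read += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"read {read / 1e6:.0f} MB in {elapsed:.3f} s "
          f"({read / 1e6 / elapsed:.1f} MB/s; page cache likely warm)")
finally:
    os.remove(path)
```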

  2. CMS Wallet Card

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Wallet Card is a quick reference statistical summary on annual CMS program and financial data. The CMS Wallet Card is available for each year from 2004...

  3. CMS Fast Facts

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has developed a new quick reference statistical summary on annual CMS program and financial data. CMS Fast Facts includes summary information on total program...

  4. 75 FR 7426 - Tier 2 Light-Duty Vehicle and Light-Duty Truck Emission Standards and Gasoline Sulfur Control...

    Science.gov (United States)

    2010-02-19

    ... 2060-AI23; 2060-AQ12 Tier 2 Light-Duty Vehicle and Light-Duty Truck Emission Standards and Gasoline.... The rulemaking also required oil refiners to limit the sulfur content of the gasoline they produce. Sulfur in gasoline has a detrimental impact on catalyst performance and the sulfur requirements have...

  5. A Novel Amphibian Tier 2 Testing Protocol: A 30-Week Exposure of Xenopus Tropicalis to the Antiandrogen Flutamide

    National Research Council Canada - National Science Library

    Knechtges, Paul L; Sprando, Robert L; Porter, Karen L; Brennan, Linda M; Miller, Mark F; Kumsher, David M; Dennis, William E; Brown, Charles C; Clegg, Eric D

    2007-01-01

    .... For that reason, a tier 2 testing protocol using Xenopus (Silurana) tropicalis and a 30-week, flow-through exposure to the antiandrogen flutamide, from stage 46 tadpoles through sexually mature adult frogs, was developed and evaluated in this pilot study...

  6. Distributed Grid Experiences in CMS DC04

    CERN Document Server

    Fanfani, A; Grandi, C; Legrand, I; Suresh, S; Campana, S; Donno, F; Jank, W; Sinanis, N; Sciabà, A; García-Abia, P; Hernández, J; Ernst, M; Anzar, A; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatine, N; Perelmutov, T; Pordes, R; Ratnikova, N; Weigand, J; Wu, Y; Colling, D J; MacEvoy, B; Tallini, H; Wakefield, L; De Filippis, N; Donvito, G; Maggi, G; Bonacorsi, D; Dell'Agnello, L; Martelli, B; Biasotto, M; Fantinel, S; Corvo, M; Fanzago, F; Mazzucato, M; Tuura, L; Martin, T; Letts, J; Bockjoo, K; Prescott, C; Rodríguez, J; Zahn, A; Bradley, D

    2005-01-01

    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS undertook a large simulated event production. The goal of the challenge was to run CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Grid (LCG) and in the US with Grid2003 were utilized to complete aspects of the challenge. A description of the experiences, successes and lessons learned from both grid infrastructures is presented.

  7. The Effect of Tier 2 Intervention for Phonemic Awareness in a Response-to-Intervention Model in Low-Income Preschool Classrooms

    Science.gov (United States)

    Koutsoftas, Anthony D.; Harmon, Mary Towle; Gray, Shelley

    2009-01-01

    Purpose: This study assessed the effectiveness of a Tier 2 intervention that was designed to increase the phonemic awareness skills of low-income preschoolers who were enrolled in Early Reading First classrooms. Method: Thirty-four preschoolers participated in a multiple baseline across participants treatment design. Tier 2 intervention for…

  8. Debugging Data Transfers in CMS

    CERN Document Server

    Bagliesi, G; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C M; Letts, J; Maes, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L; Sonajalg, S; Wu, Y; Van Mulders, P; Villella, I; Wurthwein, F

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking in several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests, called the LoadTest, was designed and deployed to equip the WLCG sites that support CMS with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable data set replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS sites by designing and enforcing a clear procedure to debug problematic links. Such a procedure aimed to move a link from a debugging phase, in a separate and independent environment, to a production environment once a set of agreed conditions was achieved for that link. The goal was to deliver, one by one, working transfer routes to the CMS data operations team...
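
    The promotion rule sketched in the abstract, where a link moves from the debugging environment to production once agreed conditions hold, can be written down as a tiny check. The concrete thresholds below (a minimum daily rate sustained for a number of consecutive days) are invented for illustration and are not the actual DDT commissioning criteria.

```python
# Illustrative link-commissioning check; thresholds are invented, not the real DDT criteria.
def is_commissioned(daily_rates_mbps, min_rate_mbps=20.0, required_days=5):
    """Return True if the link sustained at least min_rate_mbps for
    required_days consecutive days at the end of the test period."""
    streak = 0
    for rate in daily_rates_mbps:
        streak = streak + 1 if rate >= min_rate_mbps else 0
    return streak >= required_days

links = {
    "T2_AA_Site1->T2_BB_Site2": [5, 12, 25, 30, 28, 26, 31],   # placeholder link names and rates
    "T2_CC_Site3->T2_DD_Site4": [40, 0, 35, 2, 50, 1, 45],
}
for name, rates in links.items():
    state = "production" if is_commissioned(rates) else "debugging"
    print(f"{name}: {state}")
```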

  9. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and its capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, in measuring the reliability of sites when running CMS activities, and in providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with a description of the monitoring tools and their inclusion in the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact on improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiency. The performance against these tests seen at the sites during the first years of LHC running is also reviewed.
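
    To make the idea of an automated readiness decision concrete, the sketch below aggregates a few days of per-test results into a simple ready/not-ready flag. The test names, the pass-fraction threshold and the look-back window are assumptions made for the example, not the actual Site Readiness metric definitions.

```python
# Illustrative readiness aggregation; test names and thresholds are invented for the example.
def site_ready(daily_results, min_pass_fraction=0.8, lookback_days=7):
    """daily_results: list (one entry per day, oldest first) of dicts
    mapping test name -> bool (passed). A site is 'ready' if the overall
    pass fraction over the last lookback_days meets min_pass_fraction."""
    window = daily_results[-lookback_days:]
    outcomes = [passed for day in window for passed in day.values()]
    if not outcomes:
        return False
    return sum(outcomes) / len(outcomes) >= min_pass_fraction

history = [
    {"job_robot": True, "sam_ce": True,  "sam_srm": False},
    {"job_robot": True, "sam_ce": True,  "sam_srm": True},
    {"job_robot": True, "sam_ce": False, "sam_srm": True},
    {"job_robot": True, "sam_ce": True,  "sam_srm": True},
]
print("ready" if site_ready(history) else "not ready")
```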

  10. CERN Open Days CMS Posters

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    Themes: 1) You are here (location P5, Cessy) 2) CERN 3) LHC 4) CMS Detector 5) Magnet 6) Subdetectors (Tracker, ECAL, HCAL, Muons) 7) Trigger and Data Acquisition 8) Collaboration 9) Site Geography 10) Construction 11) Lowering and Installation 12) Physics

  11. Representatives of the companies receiving CMS Gold Awards on 15 March, pictured in front of the first superconducting coil module at the experiment construction site.

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The fifth annual CMS awards ceremony was held at CERN on 15 March. This year, six of the approximately 1000 companies who work for CMS were honoured with the Gold Award for demonstrating excellence by providing parts on schedule, within budget and within specification. Two of the prize-winners also received the Crystal Award, which is given to a company that has taken further efforts to develop new designs, explore novel technologies and collaborate in research and development program

  12. A systems relations model for Tier 2 early intervention child mental health services with schools: an exploratory study.

    Science.gov (United States)

    van Roosmalen, Marc; Gardner-Elahi, Catherine; Day, Crispin

    2013-01-01

    Over the last 15 years, policy initiatives have aimed at the provision of more comprehensive Child and Adolescent Mental Health care. These have presented a series of new challenges in organising and delivering Tier 2 child mental health services, particularly in schools. This exploratory study aimed to examine and clarify the service model underpinning a Tier 2 child mental health service offering school-based mental health work. Using semi-structured interviews, clinician descriptions of operational experiences were gathered. These were analysed using grounded theory methods. Analysis was validated by respondents at two stages. A pathway for casework emerged that included a systemic consultative function, as part of an overall three-function service model, which required: (1) activity as a member of the multi-agency system; (2) activity to improve the system working around a particular child; and (3) activity to universally develop a Tier 1 workforce confident in supporting children at risk of or experiencing mental health problems. The study challenged the perception that such a service serves solely a Tier 2 function and assumptions about the requisite workforce to deliver the service model, and it could give service providers a rationale for negotiating service models that include an explicit focus on improving children's environments.

  13. CMS brochure (English version)

    CERN Document Server

    Marcastel, Fabienne

    2014-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  14. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  15. CMS Drug Spending

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS has released several information products that provide spending information for prescription drugs in the Medicare and Medicaid programs. The CMS Drug Spending...

  16. CMS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  17. CMS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet.

  18. CMS Records Schedule

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Records Schedule provides disposition authorizations approved by the National Archives and Records Administration (NARA) for CMS program-related records...

  19. CMS-Wave

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program: CMS-Wave. CMS-Wave is a two-dimensional spectral wind-wave generation and transformation model that employs a forward-marching, finite-difference method to solve the wave action conservation equation. Capabilities of CMS-Wave include wave shoaling, refraction... CMS-Wave can be used in either a half- or full-plane mode, with primary waves propagating from the seaward boundary toward shore. It can
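
    For reference, the wave action conservation (wave-action balance) equation that spectral wave models of this kind solve can be written, in its generic time-dependent form, as below. CMS-Wave solves a steady-state, forward-marching variant of this balance, so the exact form used in the model may differ in detail.

```latex
% Generic wave-action balance solved by spectral wave models (time-dependent form).
% N(\sigma,\theta) = E(\sigma,\theta)/\sigma is the wave action density,
% c_x, c_y are propagation velocities in geographic space,
% c_\sigma, c_\theta are velocities in spectral space, and S collects the source/sink terms.
\[
\frac{\partial N}{\partial t}
  + \frac{\partial (c_x N)}{\partial x}
  + \frac{\partial (c_y N)}{\partial y}
  + \frac{\partial (c_\sigma N)}{\partial \sigma}
  + \frac{\partial (c_\theta N)}{\partial \theta}
  = \frac{S}{\sigma}
\]
```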

  20. CMS 2006 - CMS France days; CMS 2006 les journees CMS FRANCE

    Energy Technology Data Exchange (ETDEWEB)

    Huss, D.; Dobrzynski, L.; Virdee, J.; Boudoule, G.; Fontaine, J.C.; Faure, J.L.; Paganini, P.; Mathez, H.; Gross, L.; Charlot, C.; Trunov, A.; Patois, Y.; Busson, P.; Maire, M.; Berthon, U.; Todorov, T.; Beaudette, F.; Sirois, Y.; Baffioni, S.; Beauceron, S.; Delmeire, E.; Agram, J.L.; Goerlach, U.; Mangeol, D.; Salerno, R.; Bloch, D.; Lassila-Perini, K.; Blaha, J.; Drobychev, G.; Gras, P.; Hagenauer, M.; Denegri, D.; Lounis, A.; Faccio, F.; Lecoq, J

    2006-07-01

    These CMS talks give the opportunity for all the teams working on the CMS (Compact Muon Solenoid) project to present the status of their work and to exchange ideas. 5 sessions have been organized: 1) CMS status and perspectives, 2) contributions of the different laboratories, 3) software and computation, 4) physics with CMS (particularly the search for the Higgs boson), and 5) electronic needs. This document gathers the slides of the presentations.

  1. CMS Central Hadron Calorimeter

    OpenAIRE

    Budd, Howard S.

    2001-01-01

    We present a description of the CMS central hadron calorimeter. We describe the production of the 1996 CMS hadron testbeam module. We show the results of the quality control tests of the testbeam module. We present some results of the 1995 CMS hadron testbeam.

  2. CMS Comic Book Brochure

    CERN Document Server

    2006-01-01

    To raise students' awareness of what the CMS detector is, how it was constructed and what it hopes to find. Titled "CMS Particle Hunter," this comic-book-style brochure explains to young budding scientists and science enthusiasts, through colorful illustrations, how the CMS detector was made, what its main parts are, and what scientists hope to find using this complex tool.

  3. Prototype for a generic thin-client remote analysis environment for CMS

    International Nuclear Information System (INIS)

    Steenberg, C.D.; Bunn, J.J.; Hickey, T.M.; Holtman, K.; Legrand, I.; Litvin, V.; Newman, H.B.; Samar, A.; Singh, S.; Wilkinson, R.

    2001-01-01

    The multi-tiered architecture of the highly distributed CMS computing systems necessitates a flexible data distribution and analysis environment. The authors describe a prototype analysis environment which functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a 'black box' on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS Muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes and get the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability and better integration with CARF-ORCA, and, importantly, it makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT
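
    Since the record highlights XML-RPC as a language-neutral transport, here is a minimal, generic XML-RPC round trip using the Python standard library. It only illustrates the transport pattern; the actual prototype used a C++ server and clients such as a Java Analysis Studio plug-in, and the method name, port and dummy payload below are arbitrary.

```python
# Minimal XML-RPC round trip using Python's standard library (illustrative transport only;
# the CMS prototype itself used a C++ server). Method name and port are arbitrary.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def make_histogram(n_bins):
    """Stand-in for a server-side query: return n_bins dummy bin contents."""
    return [i * i for i in range(n_bins)]

server = SimpleXMLRPCServer(("127.0.0.1", 8765), allow_none=True, logRequests=False)
server.register_function(make_histogram, "make_histogram")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = xmlrpc.client.ServerProxy("http://127.0.0.1:8765")
print(client.make_histogram(5))   # -> [0, 1, 4, 9, 16]
server.shutdown()
```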

  4. CMS offline web tools

    International Nuclear Information System (INIS)

    Metson, S; Newbold, D; Belforte, S; Kavka, C; Bockelman, B; Dziedziniewicz, K; Egeland, R; Elmer, P; Eulisse, G; Tuura, L; Evans, D; Fanfani, A; Feichtinger, D; Kuznetsov, V; Lingen, F van; Wakefield, S

    2008-01-01

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to effectively use the CMS computing system. The CMS web tools project aims to provide a consistent interface to all these tools

  5. CMS offline web tools

    Energy Technology Data Exchange (ETDEWEB)

    Metson, S; Newbold, D [H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Belforte, S; Kavka, C [INFN, Sezione di Trieste (Italy); Bockelman, B [University of Nebraska Lincoln, Lincoln, NE (United States); Dziedziniewicz, K [CERN, Geneva (Switzerland); Egeland, R [University of Minnesota Twin Cities, Minneapolis, MN (United States); Elmer, P [Princeton (United States); Eulisse, G; Tuura, L [Northeastern University, Boston, MA (United States); Evans, D [Fermilab MS234, Batavia, IL (United States); Fanfani, A [Universita degli Studi di Bologna (Italy); Feichtinger, D [PSI, Villigen (Switzerland); Kuznetsov, V [Cornell University, Ithaca, NY (United States); Lingen, F van [California Institute of Technology, Pasadena, CA (United States); Wakefield, S [Blackett Laboratory, Imperial College, London (United Kingdom)

    2008-07-15

    We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and practical knowledge of the systems to effectively use the CMS computing system. The CMS web tools project aims to provide a consistent interface to all these tools.

  6. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included shutdown construction, maintenance and repairs; status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08; preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate CMS Management and the Detector Groups for the...

  7. International Masterclass at CMS

    CERN Multimedia

    Lapka, M

    2012-01-01

    The CMS collaboration welcomed a class of French high school students to the CERN facility in Meyrin, Switzerland, on 12 March 2012. Students spent the day meeting with physicists, hearing talks, asking questions, and participating in a hands-on exercise using real data collected by the CMS experiment at the Large Hadron Collider. Talks and other resources are available here: http://ippog-dev.web.cern.ch/resources/2012/ippog-international-masterclass-2012-cms

  8. A new visitor centre for CMS

    CERN Document Server

    2001-01-01

    At the inauguration of the new CMS visitor centre. The CMS experiment inaugurated a new visitor centre at its Cessy site on 14 June. This will allow the thousands of people who come to CERN each year to follow the construction of one of the Laboratory's flagship experiments first-hand. CERN receives over 20,000 visitors each year. Until recently, many of them were taken on a guided tour of one of the LEP experiments. With the closure of LEP, however, trips underground are no longer possible, and the Visits' Service has put in place a number of other itineraries (Bulletin 46/2000). Since the CMS detector will be almost entirely constructed in a surface hall, it is now taking a big share of the limelight. The CMS visitor centre has been built on a platform overlooking the CMS construction. It contains a set of clear descriptive posters describing the experiment, along with a video projection showing animations and movies about the CMS construction. In the coming weeks, a display of CMS detector elements will be added, as...

  9. Predicting dataset popularity for the CMS experiment

    CERN Document Server

    INSPIRE-00005122; Li, Ting; Giommi, Luca; Bonacorsi, Daniele; Wildish, Tony

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure.
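
    A very small example of the kind of data-driven prediction discussed here: given weekly access counts per dataset, fit a linear trend and extrapolate one week ahead. The dataset names and counts are made up, and the real analysis in the paper is based on far richer CMS meta-data and models.

```python
# Toy popularity forecast from weekly access counts (illustrative; not the paper's model).
import numpy as np

weekly_accesses = {                      # placeholder dataset names and counts
    "/Primary/RunA/AOD": [120, 150, 170, 210, 260],
    "/Primary/RunB/AOD": [400, 310, 240, 180, 120],
}

def forecast_next_week(counts):
    """Fit a straight line to the access history and extrapolate one step."""
    weeks = np.arange(len(counts))
    slope, intercept = np.polyfit(weeks, counts, deg=1)
    return max(0.0, slope * len(counts) + intercept)

for name, counts in weekly_accesses.items():
    print(f"{name}: predicted accesses next week ~ {forecast_next_week(counts):.0f}")
```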

  10. Predicting dataset popularity for the CMS experiment

    International Nuclear Information System (INIS)

    Kuznetsov, V.; Li, T.; Giommi, L.; Bonacorsi, D.; Wildish, T.

    2016-01-01

    The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site utilization can be very useful for improving the throughput of the system and its management. In this paper, we discuss a first CMS analysis of dataset popularity based on CMS meta-data, which can be used as a model for dynamic data placement and provide the foundation of a data-driven approach for the CMS computing infrastructure. (paper)

  11. CMS rewards eight of its suppliers

    CERN Multimedia

    2002-01-01

    At the third awards ceremony to honour its top suppliers, the CMS collaboration presented awards to eight firms. Seven of them are involved in the manufacture of the magnet. The winners of the third CMS suppliers' awards visit the assembly site for the detector. Unsurprisingly, the CMS magnet was once again in the limelight at the third awards ceremony in honour of the collaboration's top suppliers. 'Unsurprisingly', because this magnet, which must produce an intense field of 4 Tesla inside an enormous volume (12 metres in diameter and 13 metres in length) is the detector's key component. As a result, many firms are involved in its construction. The CMS suppliers' awards are an annual event aimed at rewarding the exceptional efforts of certain companies. Firms are only eligible once they have delivered at least 50% of their supplies. This year, the collaboration honoured eight firms at a ceremony held on Monday 4 March in the main auditorium. Seven of th...

  12. Auger Physicists visit CMS

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    Visit to CMS at CERN Point 5 (P5), in the experimental cavern: Alan Watson, Auger Spokesperson Emeritus, University of Leeds; Jim Cronin, Nobel Laureate, Auger Spokesperson Emeritus, University of Chicago; Jim Virdee, CMS Former Spokesperson, Imperial College; Jim Matthews, Auger Co-Spokesperson, Louisiana State University

  13. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    2010-01-01

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174

  14. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    The Agendas and Minutes of the Management Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223  The Agendas and Minutes of the Collaboration Board meetings are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 

  15. Scaling CMS data transfer system for LHC start-up

    International Nuclear Information System (INIS)

    Tuura, L; Bockelman, B; Bonacorsi, D; Egeland, R; Feichtinger, D; Metson, S; Rehn, J

    2008-01-01

    The CMS experiment will need to sustain uninterrupted, highly reliable, high-throughput and very diverse data transfer activities as LHC operations start. PhEDEx, the CMS data transfer system, will be responsible for the full range of the transfer needs of the experiment. Covering the entire spectrum is a demanding task: from the critical high-throughput transfers between CERN and the Tier-1 centres, to high-scale production transfers among the Tier-1 and Tier-2 centres, to managing the 24/7 transfers among all the 170 institutions in CMS, and to providing straightforward access to a handful of files for individual physicists. In order to produce a system with confirmed capability to meet the objectives, the PhEDEx data transfer system has undergone rigorous development and numerous demanding scale tests. We have sustained production transfers exceeding 1 PB/month for several months and have demonstrated core system capacity several orders of magnitude above expected LHC levels. We describe the level of scalability reached, and how we got there, with focus on the main insights into developing a robust, lock-free and scalable distributed database application, the validation stress-test methods we have used, and the development and testing tools we found practically useful
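
    To put the quoted 1 PB/month figure in perspective, the snippet below converts it to an average sustained rate; the 30-day month and decimal prefixes are simplifying assumptions made only for the arithmetic.

```python
# Rough conversion of 1 PB/month to an average rate (assumes a 30-day month, decimal units).
petabytes_per_month = 1.0
seconds_per_month = 30 * 24 * 3600          # 2,592,000 s
bytes_per_month = petabytes_per_month * 1e15
rate_mb_s = bytes_per_month / seconds_per_month / 1e6
rate_gb_s = rate_mb_s / 1e3
print(f"~{rate_mb_s:.0f} MB/s (~{rate_gb_s:.2f} GB/s) sustained on average")
```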

  16. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratulate C...

  17. CMS MANAGEMENT MEETINGS

    CERN Multimedia

    Jim Virdee

    Management Board Agendas and minutes of meetings of the Management Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=223 Collaboration Board Agendas and minutes of meetings of the Collaboration Board are accessible to CMS members at: http://indico.cern.ch/categoryDisplay.py?categId=174 LHCC: Feedback from the CMS Referees, LHCC 97 February 25, 2009. The CMS LHCC referees met with representatives of CMS on 17-2-09, to review progress since the last November minireview. The main topics included  shutdown construction, maintenance and repairs;  status of the preshower detector; commissioning and physics analysis results from cosmic ray running and CSA08;   preparations for physics, off line analysis, computing, and data distribution. TOTEM management and the TOTEM referees then joined us for a joint session to examine the readiness of the TOTEM detector. Detector construction, maintenance, and repairs. The referees congratula...

  18. Russian and Belorussian firms receive CMS Gold Awards

    CERN Multimedia

    2003-01-01

    On 7 March, CMS handed out its three latest Gold Awards in recognition of outstanding supplier performance. The directors of two Russian firms (ENTEK and the Myasishchev Design Bureau) and of the Belorussian company MZOR received their awards on the occasion of a visit by dignitaries from the two countries. The directors and dignitaries are pictured here with leaders of the CMS Collaboration in front of the CMS hadron calorimeter end-cap at the detector's assembly site.

  19. Readiness of the ATLAS Spanish Federated Tier-2 for the Physics Analysis of the early collision events at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, E; Amoros, G; Fassi, F; Fernandez, A; Gonzalez, S; Kaci, M; Lamas, A; Salt, J; Sanchez, J [Instituto de Fisica Corpuscular (IFIC) (centro mixto CSIC - University Valencia), E-46071 Valencia (Spain); Nadal, J; Borrego, C; Campos, M; Pacheco, A [Institut de Fisica d' Altes Energies (IFAE) Facultat de Ciencies UAB, E-08193 Bellaterra, Barcelona (Spain); Pardo, J; Del Cano, L; Peso, J Del; Fernandez, P; March, L; Munoz, L [Universidad Autonoma de Madrid (UAM) Dpto. de Fisica Teorica, 28049 Madrid (Spain); Espinal, X, E-mail: elena.oliver@ific.uv.e [Port d' Informacio CientIfica (PIC) Campus UAB Edifici D E-08193 Bellaterra, Barcelona (Spain)

    2010-04-01

    In this contribution an evaluation of the readiness parameters of the Spanish ATLAS Federated Tier-2 is presented, regarding the ATLAS data taking which is expected to start by the end of the year 2009. Special attention is paid to Physics Analysis from different points of view: Data Management, Simulated event Production and Distributed Analysis Tests. Several use cases of Distributed Analysis in GRID infrastructures and of local interactive analysis in non-Grid farms are provided, in order to evaluate the interoperability between both environments and to compare the different performances. The prototypes for local computing infrastructures for data analysis are described. Moreover, information about local analysis facilities, called Tier-3s, is given.

  20. CMS analysis school model

    International Nuclear Information System (INIS)

    Malik, S; Bloom, K; Shipsey, I; Cavanaugh, R; Klima, B; Chan, Kai-Feng; D'Hondt, J; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS held around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  1. CMS Analysis School Model

    Energy Technology Data Exchange (ETDEWEB)

    Malik, S. [Nebraska U.; Shipsey, I. [Purdue U.; Cavanaugh, R. [Illinois U., Chicago; Bloom, K. [Nebraska U.; Chan, Kai-Feng [Taiwan, Natl. Taiwan U.; D' Hondt, J. [Vrije U., Brussels; Klima, B. [Fermilab; Narain, M. [Brown U.; Palla, F. [INFN, Pisa; Rolandi, G. [CERN; Schörner-Sadenius, T. [DESY

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS held around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  2. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document reviews the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  3. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document reviews the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  4. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document reviews the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking

  5. CMS brochure (English version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  6. CMS brochure (French version)

    CERN Document Server

    2017-01-01

    CMS is the heaviest detector at the LHC, the most powerful particle accelerator in the world, which started up in 2008. A multi-purpose detector, CMS is composed of several systems built around a powerful superconducting magnet. CMS est la plus lourde des expériences du LHC, l'accélérateur de particules le plus puissant au monde qui a été mis en service en 2008. Les détecteurs de cette expérience polyvalente sont placés autour d'un puissant aimant supraconducteur.

  7. CMS Higgs boson results

    CERN Document Server

    Bluj, Michal Jacek

    2018-01-01

    In this report we review recent Higgs boson results obtained with pp collisions at $\sqrt{s} = 13$ TeV recorded by the CMS detector in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The 2016 data allowed the observation of the $H \to \tau\tau$ and $H \to WW$ decays with high significance. We also present a combined measurement based on the full set of CMS analyses performed with the 2016 data. These results are compatible with the standard model predictions, with the precision of several measurements exceeding that of the combination of the ATLAS and CMS data collected in 2011 and 2012.

  8. Data Scouting in CMS

    CERN Document Server

    Anderson, Dustin James

    2016-01-01

    In 2011, the CMS collaboration introduced Data Scouting as a way to produce physics results with events that cannot be stored on disk, due to resource limits in the data acquisition and offline infrastructure. The viability of this technique was demonstrated in 2012, when 18 fb$^{-1}$ of collision data at $\\sqrt{s}$ = 8 TeV were collected. The technique is now a standard ingredient of CMS and ATLAS data-taking strategy. In this talk, we present the status of data scouting in CMS and the improvements introduced in 2015 and 2016, which promoted data scouting to a full-fledged, flexible discovery tool for the LHC Run II.

  9. Methods for Tier 2 Modeling Within the Training Range Environmental Evaluation and Characterization System

    Science.gov (United States)

    2011-03-01

    [Extraction artifact: only fragments of the report survive. They mention an erosion rate compared with 54 tons/acre-yr as computed with the Universal Soil Loss Equation (USLE); the Einstein and Brown equations; the USLE estimate already needed for soil erosion that exports aqueous-phase (adsorbed and dissolved) MC; and variable definitions: b = soil dry bulk density (g/m3), A = AOI site area (m2), E = soil erosion rate as determined from the USLE (m/yr).]

  10. CMS Financial Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains the annual CMS financial statements as required under the Chief Financial Officers (CFO) Act of 1990 (P.L. 101-576). The CFO Act marked a major...

  11. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  12. CMS cavern inspection robot

    CERN Document Server

    Ibrahim, Ibrahim

    2017-01-01

    Robots which are immune to the CMS cavern environment and wirelessly controlled: one actuated by smart materials (Ionic Polymer-Metal Composites and Macro Fiber Composites), one regular brushed DC rover, one servo-driven rover, and one stair-climbing robot.

  13. The CMS Electronic Logbook

    CERN Multimedia

    Bukowiec, S; Beccati, B; Behrens, U; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa Perez, J A; Deldicque, C; Erhan, S; Gigi, D; Glege, F; Gomez-Reino, R; Hatton, D; Hwong, Y L; Loizides, C; Ma, F; Masetti, L; Meijers, F; Meschi, E; Meyer, A; Mommsen, R K; Moser, R; O’Dell, V; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Racz, A; Raginel, O; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, M; Sumorok, K; Sungho Yoon, A

    2010-01-01

    The CMS ELogbook (ELog) is a collaborative tool, which provides a platform to share and store information about various events or problems occurring in the Compact Muon Solenoid (CMS) experiment at CERN during operation. The ELog is based on a Model–View–Controller (MVC) software architectural pattern and uses an Oracle database to store messages and attachments. The ELog is developed as a pluggable web component in Oracle Portal in order to provide better management, monitoring and security.

  14. Forward physics with CMS

    CERN Document Server

    Grothe, Monika

    2008-01-01

    Forward physics with CMS at the LHC covers a wide range of physics subjects, including very low-x_Bj QCD, underlying event and multiple interactions characteristics, gamma-mediated processes, shower development at the energy scale of primary cosmic ray interactions with the atmosphere, diffraction in the presence of a hard scale and even MSSM Higgs discovery in central exclusive production. Selected feasibility studies to illustrate the forward physics potential of CMS are presented.

  15. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection are implemented in Python. The tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either a transient or a persistent version of a given scenario, as well as a specific version of that scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown based on the current implementation, ensuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.
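
    A schematic illustration of how geometry scenarios might be collected from Python configuration fragments: the registry, the scenario and fragment names, and the load_fragment call in the generated script are hypothetical stand-ins invented for this example, not the actual CMSSW configuration machinery.

```python
# Hypothetical sketch of assembling a geometry configuration from named fragments.
# Scenario names, fragment paths and load_fragment are invented; this is not the CMSSW implementation.
GEOMETRY_SCENARIOS = {
    "Run2013":     ["geometry/tracker_2013.py", "geometry/calorimetry_2013.py"],
    "Upgrade2020": ["geometry/tracker_phase1.py", "geometry/calorimetry_upgrade.py"],
}

def build_geometry_script(scenario, persistent=False):
    """Collect the fragments of one scenario into a single configuration script."""
    try:
        fragments = GEOMETRY_SCENARIOS[scenario]
    except KeyError:
        raise ValueError(f"unknown geometry scenario: {scenario!r}") from None
    header = f"# geometry scenario: {scenario} ({'persistent' if persistent else 'transient'})\n"
    loads = "\n".join(f"load_fragment({path!r})" for path in fragments)
    return header + loads + "\n"

print(build_geometry_script("Upgrade2020"))
```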

  16. Tier-2 Optimisation for Computational Density/Diversity and Big Data

    Science.gov (United States)

    Fay, R. B.; Bland, J.

    2014-06-01

    As the number of cores on chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators. These include scheduling for massively multi-core systems (potentially including Graphical Processing Units (GPU), integrated and dedicated, and Many Integrated Core (MIC) processors) to ensure a balanced throughput of jobs while preserving overall cluster throughput, as well as the increasing complexity of developing for these heterogeneous platforms, and the challenge of managing this more complex mix of resources. In addition, meeting data demands as both dataset sizes increase and as the rate of demand scales with increased computational power requires additional performance from the associated storage elements. In this report, we evaluate one emerging technology, Solid State Drive (SSD) caching for RAID controllers, with consideration of its potential to assist in meeting evolving demand. We also briefly consider the broader developing trends outlined above in order to identify issues that may develop and assess what actions should be taken in the immediate term to address them.

  17. Tier-2 optimisation for computational density/diversity and big data

    International Nuclear Information System (INIS)

    Fay, R B; Bland, J

    2014-01-01

    As the number of cores on chip continues to trend upwards and new CPU architectures emerge, increasing CPU density and diversity presents multiple challenges to site administrators. These include scheduling for massively multi-core systems (potentially including Graphical Processing Units (GPU), integrated and dedicated, and Many Integrated Core (MIC) processors) to ensure a balanced throughput of jobs while preserving overall cluster throughput, as well as the increasing complexity of developing for these heterogeneous platforms, and the challenge of managing this more complex mix of resources. In addition, meeting data demands as both dataset sizes increase and as the rate of demand scales with increased computational power requires additional performance from the associated storage elements. In this report, we evaluate one emerging technology, Solid State Drive (SSD) caching for RAID controllers, with consideration of its potential to assist in meeting evolving demand. We also briefly consider the broader developing trends outlined above in order to identify issues that may develop and assess what actions should be taken in the immediate term to address them.

  18. Acute tier-1 and tier-2 effect assessment approaches in the EFSA Aquatic Guidance Document: are they sufficiently protective for insecticides?

    NARCIS (Netherlands)

    Wijngaarden, van R.P.A.; Maltby, L.; Brock, T.C.M.

    2015-01-01

    BACKGROUND The objective of this paper is to evaluate whether the acute tier-1 and tier-2 methods as proposed by the Aquatic Guidance Document recently published by the European Food Safety Authority (EFSA) are appropriate for deriving regulatory acceptable concentrations (RACs) for insecticides.

  19. 75 FR 57958 - Solicitation of Written Comments on Draft Tier 2 Strategies/Modules for Inclusion in the “HHS...

    Science.gov (United States)

    2010-09-23

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Solicitation of Written Comments on Draft Tier 2 Strategies/ Modules for Inclusion in the ``HHS Action Plan to Prevent Healthcare- Associated Infections'' AGENCY: Department of Health and Human Services, Office of the Assistant Secretary for Health, Office of...

  20. CMS (Compact Muon Solenoid)

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The milestone workshops on LHC experiments in Aachen in 1990 and at Evian in 1992 provided the first sketches of how LHC detectors might look. The concept of a compact general-purpose LHC experiment based on a solenoid to provide the magnetic field was first discussed at Aachen, and the formal Expression of Interest was aired at Evian. It was here that the Compact Muon Solenoid (CMS) name first became public. Optimizing first the muon detection system is a natural starting point for a high luminosity (interaction rate) proton-proton collider experiment. The compact CMS design called for a strong magnetic field, of some 4 Tesla, using a superconducting solenoid, originally about 14 metres long and 6 metres bore. (By LHC standards, this warrants the adjective 'compact'.) The main design goals of CMS are: 1 - a very good muon system providing many possibilities for momentum measurement (physicists call this a 'highly redundant' system); 2 - the best possible electromagnetic calorimeter consistent with the above; 3 - high quality central tracking to achieve both the above; and 4 - an affordable detector. Overall, CMS aims to detect cleanly the diverse signatures of new physics by identifying and precisely measuring muons, electrons and photons over a large energy range at very high collision rates, while also exploiting the lower luminosity initial running. As well as proton-proton collisions, CMS will also be able to look at the muons emerging from LHC heavy ion beam collisions. The Evian CMS conceptual design foresaw the full calorimetry inside the solenoid, with emphasis on precision electromagnetic calorimetry for picking up photons. (A light Higgs particle will probably be seen via its decay into photon pairs.) The muon system now foresaw four stations. Inner tracking would use silicon microstrips and microstrip gas chambers, with over 10^7 channels offering high track finding efficiency. In the central CMS barrel, the tracking elements are

  1. Performance studies and improvements of CMS distributed data transfers

    International Nuclear Information System (INIS)

    Bonacorsi, D; Flix, J; Kaselis, R; Magini, N; Letts, J; Sartirana, A

    2012-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered distributed infrastructures. The CMS experiment relies on the File Transfer Service (FTS) for data distribution, a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centers and used by all the computing sites in CMS, subject to established CMS and site setup policies, including all the virtual organizations making use of the Grid resources at the site, and properly dimensioned to satisfy all of their requirements. Managing the service efficiently needs good knowledge of the CMS needs for all kinds of transfer routes, and of the sharing and interference with other VOs using the same FTS transfer managers. This contribution deals with a complete revision of all FTS servers used by CMS, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels, as well as performance studies for all kinds of transfer routes, including measurements of the overheads introduced by SRM servers and storage systems, FTS server misconfigurations and identification of congested channels, historical transfer throughputs per stream, file-latency studies, … This information is retrieved directly from the FTS servers through the FTS Monitor webpages and conveniently archived for further analysis. The project provides an interface to all these values, to ease the analysis of the data.
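
    A rough sketch of the harvesting step described above (pulling per-channel metrics from the FTS Monitor pages and archiving them for later analysis) is given below. The monitor URL, the JSON field names and the congestion threshold are hypothetical placeholders, not the actual FTS Monitor interface.

    ```python
    # Sketch: periodically pull per-channel transfer metrics and archive them
    # for later analysis. URL and field names are hypothetical placeholders.
    import json
    import time
    import urllib.request

    FTS_MONITOR_URL = "https://fts.example.org/monitor/channels.json"  # placeholder

    def fetch_channel_metrics(url=FTS_MONITOR_URL):
        """Return a list of per-channel records (rate, queued files, failures)."""
        with urllib.request.urlopen(url, timeout=30) as response:
            return json.load(response)

    def archive(records, path="fts_metrics.jsonl"):
        """Append one timestamped snapshot per channel to a JSON-lines archive."""
        now = int(time.time())
        with open(path, "a") as out:
            for record in records:
                record["timestamp"] = now
                out.write(json.dumps(record) + "\n")

    def congested_channels(records, max_queued=1000):
        """Flag channels whose queued-file backlog suggests congestion."""
        return [r["channel"] for r in records if r.get("queued_files", 0) > max_queued]
    ```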

  2. Evacuation drill at CMS

    CERN Multimedia

    Niels Dupont-Sagorin and Christoph Schaefer

    2012-01-01

    Training personnel, including evacuation guides and shifters, checking procedures, improving collaboration with the CERN Fire Brigade: the first real-life evacuation drill at CMS took place on Friday 3 February from 12 p.m. to 3 p.m. in the two caverns located at Point 5 of the LHC.   CERN personnel during the evacuation drill at CMS. Evacuation drills are required by law and have to be organized periodically in all areas of CERN, both above and below ground. The last drill at CMS, which took place in June 2007, revealed some desiderata, most notably the need for a public address system. With this equipment in place, it is now possible to broadcast audio messages from the CMS control room to the underground areas.   The CMS Technical Coordination Team and the GLIMOS have focused particularly on preparing collaborators for emergency situations by providing training and organizing regular safety drills with the HSE Unit and the CERN Fire Brigade. This Friday, the practical traini...

  3. CMS Thesis Award

    CERN Multimedia

    2004-01-01

    The 2003 CMS thesis award was presented to Riccardo Ranieri on 15 March for his Ph.D. thesis "Trigger Selection of WH → μ ν b bbar with CMS", where 'WH → μ ν b bbar' represents the associated production of the W boson and the Higgs boson and their subsequent decays. Riccardo received his Ph.D. from the University of Florence and was supervised by Carlo Civinini. In total nine theses were nominated for the award, which was judged on originality, impact within the field of high energy physics, impact within CMS and clarity of writing. Gregory Snow, secretary of the awarding committee, explains why Riccardo's thesis was chosen: "The search for the Higgs boson is one of the main physics goals of CMS. Riccardo's thesis helps the experiment to formulate the strategy which will be used in that search." Lorenzo Foà, Chairperson of the CMS Collaboration Board, presented Riccardo with a commemorative engraved plaque. He will also receive the opportunity to...

  4. Synergy between the CIMENT tier-2 HPC centre and the HEP community at LPSC in Grenoble (France)

    International Nuclear Information System (INIS)

    Biscarat, C; Bzeznik, B

    2014-01-01

    Two of the most pressing questions in current research in Particle Physics are the characterisation of the newly discovered Higgs-like boson at the LHC and the search for New Phenomena beyond the Standard Model of Particle Physics. Physicists at LPSC in Grenoble are leading the search for one type of New Phenomena in ATLAS. Given the rich multitude of physics studies proceeding in parallel in ATLAS, one limiting factor in the timely analysis of data is the availability of computing resources. Another LPSC team suffers from the same limitation. This team is leading the ultimate precision measurement of the W boson mass with DØ data, which yields an indirect constraint on the Higgs boson mass which can be compared with the direct measurements of the mass of the newly discovered boson at LHC. In this paper, we describe the synergy between CIMENT, a regional multidisciplinary HPC centre, and the HEP community in Grenoble in the context of the analysis of data recorded by the ATLAS experiment at the LHC collider and the D0 experiment at the Tevatron collider. CIMENT is a federation of twelve HPC clusters, of about 90 TFlop/s, one of the most powerful HPC tier-2 centres in France. The sharing of resources between different scientific fields, like the ones discussed in this article, constitutes a great asset because the spikes in need of computing resources are uncorrelated in time between different fields.

  5. A new era for central processing and production in CMS

    International Nuclear Information System (INIS)

    Fajardo, E; Gutsche, O; Foulkes, S; Linacre, J; Spinoso, V; Lahiff, A; Gomez-Ceballos, G; Klute, M; Mohapatra, A

    2012-01-01

    The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this achievement as the quantity of events increases, CMS computing has migrated at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage as well as a reduction in operational manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues have arisen during its commissioning. The largest operational challenges were in the usage and monitoring of resources, mainly a result of a change in the way work is allocated. Instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of the operational challenges, and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier 2 level in late 2011. As case studies, we will show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
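
    The central injection and global monitoring of work described above can be pictured with a small toy model; the class, method and state names below are illustrative only and not the WMAgent or Request Manager API.

    ```python
    # Toy model: requests are injected centrally, agents pull work from the
    # manager, and operators monitor the global state instead of running
    # individual workflows. Names are illustrative, not the WMAgent API.
    from collections import Counter

    class ToyRequestManager:
        def __init__(self):
            self.requests = {}                  # request name -> status

        def inject(self, name):
            self.requests[name] = "new"         # centrally injected work

        def acquire_work(self, agent, n=1):
            """An agent pulls up to n new requests and starts running them."""
            picked = [r for r, s in self.requests.items() if s == "new"][:n]
            for r in picked:
                self.requests[r] = "running@" + agent
            return picked

        def global_summary(self):
            """What operators monitor: request counts per state across all work."""
            return Counter(s.split("@")[0] for s in self.requests.values())

    manager = ToyRequestManager()
    for i in range(5):
        manager.inject("MonteCarlo_campaign_%d" % i)
    manager.acquire_work("agent_at_tier1", n=2)
    print(manager.global_summary())             # e.g. Counter({'new': 3, 'running': 2})
    ```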

  6. Success in the pipeline for CMS

    CERN Multimedia

    2008-01-01

    The very heart of any LHC experiment is not a pixel detector, nor a vertex locator but a beam pipe. It is the site of each collision and the boundary where the accelerator and experiment meet. As an element of complex design and manufacture the CMS beam pipe was fifteen years in the making and finally fully installed on Tuesday 10 June. Watch the video! End cap beam pipe installation in the CMS detector. Central beam pipe installation.The compensation modules were the final pieces to take their places in the cavern at Point 5: "These are like bellows," says Wolfram Zeuner, Deputy Technical Co-ordinator for CMS. "They allow us to compensate for the change in length when we heat or cool the beam pipe. And they are the very last elements; beam pipe installation, which began last year, is now complete." The beam pipe is neither too fragile nor too bulky, but just right to satisfy the conflicting n...

  7. CMS Use of a Data Federation

    CERN Document Server

    Bloom, Kenneth Arthur

    2014-01-01

    CMS is in the process of deploying an Xrootd-based infrastructure to facilitate a global data federation. The services of the federation are available to export data from half the physical capacity, and the majority of sites are configured to read data over the federation as a back-up. CMS began with a relatively modest set of use-cases for recovery of failed local file opens, debugging and visualization. CMS is finding that the data federation can be used to support small scale analysis and load balancing. Looking forward we see potential in using the federation to provide more flexibility in the locations where workflows are executed, as the differences between local access and wide-area access are diminished by optimization and improved networking. In this presentation we will discuss the application development work and the facility deployment work, the use-cases currently in production, and the potential for the technology moving forward.
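
    The fallback behaviour described above (open the local copy when it exists, otherwise read the same file through the federation) can be illustrated as follows; the local mount point and redirector host are placeholders, not actual CMS site settings.

    ```python
    # Sketch of a local-first open with a wide-area fallback through a global
    # Xrootd redirector. Paths and host names are placeholders.
    import os

    LOCAL_PREFIX = "/storage/cms"                       # placeholder local mount
    GLOBAL_REDIRECTOR = "root://xrootd.example.org//"   # placeholder redirector

    def resolve(lfn):
        """Return the path or URL to open for a logical file name like /store/..."""
        local_path = LOCAL_PREFIX + lfn
        if os.path.exists(local_path):
            return local_path                           # fast local access
        return GLOBAL_REDIRECTOR + lfn.lstrip("/")      # fall back to the federation

    print(resolve("/store/data/Run2012A/some_dataset/file.root"))
    ```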

  8. The Latest from CMS

    CERN Multimedia

    2009-01-01

    CMS is on track to be ready for physics one month in advance of the LHC restart. The final installations are being completed and tests are being run to ensure that the experiment is as well prepared as possible to exploit sustained LHC operation throughout 2010. Physics week in Bologna, Italy, was a valuable time for CMS collaborators to discuss preparations for numerous physics analyses, as well as the performance of the detector during the recent data-taking period with cosmics (CRAFT 09). During this five-week exercise, more than 300 million cosmic events were recorded with the magnetic field on. This large data-set is being used to further improve the sub-detector alignment, calibration and performance whilst awaiting p-p collisions. Meanwhile, in the experimental cavern, Wolfram Zeuner, Deputy Technical Coordinator of CMS, reports "We are now very nearly closed up again. We are just doing the final clean-up work and are ready t...

  9. Tracking performance with cosmic rays in CMS

    International Nuclear Information System (INIS)

    Cerati, G.B.

    2009-01-01

    The CMS Tracker is the biggest all-silicon detector in the world and is designed to be extremely efficient and accurate even in a very hostile environment such as the one close to the CMS collision point. It consists of an inner pixel detector, made of three barrel layers (48M pixels) and four forward disks (16M pixels), and an outer micro-strip detector, divided into two barrel sub-detectors, TIB and TOB, and two endcap sub-detectors, TID and TEC, for a total of 9.6M strips. The commissioning of the CMS Tracker detector was initially carried out at the Tracker Integration Facility (TIF) at CERN, where cosmic ray data were collected for the strip detector only, and is still ongoing at the CMS site (LHC Point 5). Here the Strip and Pixel detectors have been installed in the experiment and are taking part in the cosmic global runs. After an overview of the tracking algorithms for cosmic-ray data reconstruction, the resulting tracking performance on cosmic data both at the TIF and at P5 is presented. The excellent performance proves that the CMS Tracker is ready for the first collisions foreseen for 2009.

  10. Model of CMS Tracker

    CERN Multimedia

    Breuker

    1999-01-01

    A full scale CMS tracker mock-up exposed temporarily in the hall of building 40. The purpose of the mock-up is to study the routing of services, assembly and installation. The people in front are only a small fraction of the CMS tracker collaboration. Left to right : M. Atac, R. Castaldi, H. Breuker, D. Pandoulas,P. Petagna, A. Caner, A. Carraro, H. Postema, M. Oriunno, S. da Mota Silva, L. Van Lancker, W. Glessing, G. Benefice, A. Onnela, M. Gaspar, G. M. Bilei

  11. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  12. CMS Comic Book

    CERN Document Server

    Gill, Karl Aaron

    2006-01-01

    Titled "CMS Particle Hunter," this colorful comic book style brochure explains to young budding scientists and science enthusiasts in colorful animation how the CMS detector was made, its main parts, and what scientists hope to find using this complex tool. Book invites young students to get involved in particle physics themselves to join the adventure. Written by Dave Barney and Aline Guevera. Layout and drawings by Eric Paiharey and Frederic Vignaux. Available in English, French, German, Italian, Spanish and Portuguese. Year Produced: 2006. Update: September 2013.

  13. Tier2 Submit Software

    Science.gov (United States)

    Download this tool for Windows or Mac, which helps facilities prepare a Tier II electronic chemical inventory report. The data can also be exported into the CAMEOfm (Computer-Aided Management of Emergency Operations) emergency planning software.

  14. CERN Researchers' Night @ CMS + TOTEM

    CERN Multimedia

    Hoch, Michael

    2011-01-01

    Young researchers' shifter training at CMS: • Introduction talk with discussion • CMS control room shadowing the shifters • TOTEM control room introduction and discussion • Scientific poster workshop and presentation • Science Art installations ‘Faces of CMS’ & ‘Science Cloud’ • CMS shift diploma presentation

  15. Final descent for CMS

    CERN Multimedia

    The 15th and last section of the CMS detector was lowered on Tuesday 22 January. The YE-1 endcap (1430 tonnes) began its 100-metre descent at 7 am and arrived gently on the floor of the experiment hall at 5.30 pm.

  16. Exclusive Production at CMS

    CERN Document Server

    Walczak, Marek

    2016-01-01

    I briefly introduce so-called central exclusive production. I mainly focus on the example analyses that have been performed in the CMS experiment at CERN. I conclude with ideas and perspectives for future work that will be done during Run 2 of the LHC. I pay special attention to the ultraperipheral collisions.

  17. Exotica in CMS

    CERN Document Server

    AUTHOR|(CDS)2072123

    2015-01-01

    Selected results on exotica searches with the CMS detector are presented. The main topics are dark matter, boosted objects, long-lived particles and classic narrow resonance searches. Most of the analyses were performed with data recorded at a centre-of-mass energy of 8 TeV, but first results obtained at 13 TeV are also shown.

  18. CMS SEES FIRST COLLISIONS

    CERN Multimedia

    A very special moment.  On 23rd November at 19:40 we recorded our first collisions with 450 GeV beams well centred in CMS.   If you have any comments / suggestions please contact Karl Aaron GILL (Editor)

  19. New Management for CMS

    CERN Document Server

    CERN Bulletin

    2010-01-01

    As of January 2010, Guido Tonelli becomes the new CMS Spokesperson with a two-year term of office. A Professor of General Physics at the University of Pisa, Italy, and a CERN Staff Member since January 2010, Tonelli had already been appointed as Deputy Spokesperson under the previous management. He has taken over from Jim Virdee, who was CMS Spokesperson from January 2007 to December 2009. Guido Tonelli, new CMS spokesperson At the same time as Tonelli becomes Spokesperson, two new Deputies, Albert De Roeck and Joe Incandela, as well as a whole new set of Coordinators, are also starting their terms of office. ”With the first data-taking run we have shown that CMS is an excellent experiment. The next challenge will be to transform CMS into a discovery machine with a view to making it synonymous with scientific excellence. This will be very tough but, again, the winning element will be the focus and coherent effort of the whole collaboration. On my side I'll do my best but I will need...

  20. CMS Achieves New Milestone

    CERN Multimedia

    2012-01-01

    In a year highlighted by the discovery of a new, Higgs-like boson, we must remember that CMS has had a tremendous year overall, with many physics results that have pushed our envelope of knowledge further. As of this week, we have published 200 papers. Congratulations to everyone involved!

  1. Improving CMS data transfers among its distributed computing facilities

    International Nuclear Information System (INIS)

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  2. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  3. CMS General Poster 2009 : to raise awareness of CMS, the CMS detector, its parts and people

    CERN Multimedia

    CMS outreach

    2012-01-01

    A poster which is identical to the two inside pages of the CMS brochure. The poster contains an image of a cross section of the CMS detector, explanation of detector parts, the aims of the CMS experiment and numbers of scientists and institutions associated with the experiment.

  4. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at LHC requires a wide use of computer resources. In the context of JINR activities in the CMS Project hardware and software resources have been provided for full participation of JINR specialists in the CMS experiment; the JINR computer infrastructure was made closer to the CERN one. JINR also provides the informational support for the CMS experiment (web-server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are stated

  5. 42 CFR 405.874 - Appeals of CMS or a CMS contractor.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Appeals of CMS or a CMS contractor. 405.874 Section... Part B Program § 405.874 Appeals of CMS or a CMS contractor. A CMS contractor's (that is, a carrier... supplier enrollment application. If CMS or a CMS contractor denies a provider's or supplier's enrollment...

  6. Recent results from CMS

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    With the increase in center-of-mass energy, a new energy frontier has been opened by the Large Hadron Collider. More than 25 fb^-1 of proton-proton collisions at sqrt(s)=13 TeV have been delivered to both the ATLAS and CMS experiments during 2016. This enormous dataset can be used to test the Standard Model in a completely new regime with tremendous precision and it has the potential to unveil new physics or set strong bounds on it. In this talk some of the most recent results made public by the CMS Collaboration will be presented. The focus will mainly be on searches for physics beyond the Standard Model, with particular emphasis on searches for dark matter candidates.

  7. Highlights from CMS

    CERN Document Server

    Autermann, Christian

    2018-01-01

    This article summarizes the latest highlights from the CMS experiment as presented at the Lepton Photon conference 2017 in Guangzhou, China. A selection of the latest physics results, the latest detector upgrades, and the current detector status are discussed. CMS has analyzed the full dataset of proton-proton collision data delivered by the LHC in 2016 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 40 fb^-1. The leap in center-of-mass energy and in luminosity with respect to the 7 and 8 TeV runs enabled interesting and relevant new physics results. A new silicon pixel tracking detector was installed during the LHC shutdown 2016/17 and has successfully started operation.

  8. Higgs searches with CMS

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The excellent performance of the LHC in the 2011 run is setting the grounds for the final chase of the Higgs boson. The CMS experiment is recording high quality data that are being thoroughly scrutinized. Several decay channels are investigated to probe the entire possible Higgs mass spectrum, from 110 to 600 GeV/c^2. The study of the first 1.5 fb^-1 of collected data already places tight limits and excludes large fractions of the Higgs mass range, while leaving the search in the theoretically favored low-mass region still open. In this seminar we will report on the diverse CMS analyses that yield these results, describing the experimental challenges that each had to meet.

  9. The CMS COLD BOX

    CERN Multimedia

    Brice, Maximilien

    2015-01-01

    The CMS detector is built around a large solenoid magnet. This takes the form of a cylindrical coil of superconducting cable that generates a field of 3.8 Tesla: about 100,000 times the magnetic field of the Earth. To run, this superconducting magnet needs to be cooled down to very low temperature with liquid helium. Providing this is the job of a compressor station and the so-called “cold box”.

  10. Higgs physics at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Holzner, André G., E-mail: andre.georg.holzner@cern.ch [University of California at San Diego (United States); Collaboration: on behalf of the CMS collaboration

    2016-12-15

    This article reviews recent measurements of the properties of the standard model (SM) Higgs boson using data recorded with the CMS detector at the LHC: its mass, width and couplings to other SM particles. We also summarise highlights from searches for new physical phenomena in the Higgs sector as they are proposed in many extensions of the SM: flavour violating and invisible decay modes, resonances decaying into Higgs bosons and searches for additional Higgs bosons.

  11. Dibosons from CMS

    Directory of Open Access Journals (Sweden)

    Martelli Arabella

    2012-06-01

    The diboson production cross sections measured by the CMS collaboration in pp collision data at √s = 7 TeV are presented. Wγ and Zγ results from the 2010 analyses (36 pb−1) are presented together with the first 2011 measurements of the WW, WZ and ZZ final states, obtained using 1.1 fb−1. Results obtained with 2010 data are also interpreted in terms of anomalous triple gauge couplings.

  12. CMS lead tungstate crystals

    CERN Multimedia

    Laurent Guiraud

    2000-01-01

    These crystals are made from lead tungstate, a crystal that is as clear as glass yet with nearly four times the density. They have been produced in Russia to be used as scintillators in the electromagnetic calorimeter on the CMS experiment, part of the LHC project at CERN. When an electron, positron or photon passes through the calorimeter it will cause a cascade of particles that will then be absorbed by these scintillating crystals, allowing the particle's energy to be measured.

  13. The CMS superconducting solenoid

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The huge solenoid that will generate the magnetic field for the CMS experiment at the LHC is shown stored in the assembly hall above the experimental cavern. The solenoid is made up of five pieces totaling 12.5 m in length and 6 m in diameter. It weighs 220 tonnes and will produce a 4 T magnetic field, 100 000 times the strength of the Earth's magnetic field, storing enough energy to melt 18 tonnes of gold.

  14. The CMS conductor

    CERN Document Server

    Horváth, I L; Marti, H P; Neuenschwander, J; Smith, R P; Fabbricatore, P; Musenich, R; Calvo, A; Campi, D; Curé, B; Desirelli, Alberto; Favre, G; Riboni, P L; Sgobba, Stefano; Tardy, T; Sequeira-Lopes-Tavares, S

    2000-01-01

    The Compact Muon Solenoid (CMS) is one of the experiments being designed in the framework of the Large Hadron Collider (LHC) project at CERN. The design field of the CMS magnet is 4 T, the magnetic length is 13 m and the aperture is 6 m. This high magnetic field is achieved by means of a 4-layer, 5-module superconducting coil. The coil is wound from an Al-stabilized Rutherford-type conductor. The nominal current of the magnet is 20 kA at 4.5 K. In the CMS coil the structural function is ensured, unlike in other existing Al-stabilized thin solenoids, both by the Al-alloy-reinforced conductor and by the external former. In this paper the retained manufacturing process of the 50-km long reinforced conductor is described. In general, the Rutherford-type cable is surrounded by high purity aluminium in a continuous co-extrusion process to produce the Insert. Thereafter the reinforcement is joined by Electron Beam Welding to the pure Al of the Insert, before being machined to the final dimensions. During the...

  15. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved using ML techniques, within a CMS framework called DCAFPilot, to process these new data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the information needed to proactively identify, and possibly fix, latency-affected transfers over the WLCG.
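
    The prediction step described above can be sketched with a generic regression model; the feature set, the numbers and the use of scikit-learn below are illustrative assumptions and not the DCAFPilot implementation.

    ```python
    # Illustrative sketch: train a regressor on historical per-transfer metrics
    # to predict transfer latencies per link. Features and values are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Columns: file size [GB], source-site load, destination-site load, hour of day
    X = np.array([
        [2.1, 0.4, 0.7, 14],
        [0.5, 0.9, 0.2,  3],
        [4.8, 0.1, 0.5, 22],
        [1.3, 0.6, 0.6,  9],
        [3.2, 0.8, 0.9, 17],
        [0.9, 0.3, 0.1,  5],
    ])
    y = np.array([120.0, 340.0, 95.0, 180.0, 410.0, 60.0])   # latency in seconds

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                        random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("predicted latency [s]:", model.predict(X_test))
    ```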

  16. CMS Industries awarded gold, crystal

    CERN Multimedia

    2006-01-01

    The CMS collaboration honoured 10 of its top suppliers in the seventh annual awards ceremony. The representatives of the firms that received the CMS Gold and Crystal Awards stand with their awards after the ceremony. The seventh annual CMS Awards ceremony was held on Monday 13 March to recognize the industries that have made substantial contributions to the construction of the collaboration's detector. Nine international firms received Gold Awards, and General Tecnica of Italy received the prestigious Crystal Award. Representatives from the companies attended the ceremony during the plenary session of CMS week. 'The role of CERN, its machines and experiments, beyond particle physics is to push the development of equipment technologies related to high-energy physics,' said CMS Awards Coordinator Domenico Campi. 'All of these industries must go beyond the technologies that are currently available.' Without the involvement of good companies over the years, the construction of the CMS detector wouldn't be possible...

  17. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  18. Improving CMS data transfers among its distributed computing facilities

    CERN Document Server

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  19. Transport of the Hadronic Forward (HF) calorimeter from building 186 (CERN Meyrin site) to the CMS construction hall at point 5, Cessy, France.

    CERN Multimedia

    Florelle Antoine

    2006-01-01

    The two halves of the Forward Hadronic Calorimeter (HF) were transported from the CERN Meyrin site to the surface assembly hall at LHC Point 5 in Cessy, France, during the first part of July. Transporting these 300 tonne objects involved the construction around them of a 65-metre long trailer, simultaneously pushed and pulled by two trucks at either end. The main road between St. Genis and Cessy was closed during these operations and a police escort was provided for the ~5 hour journeys. The two HF halves will be the first major elements to be lowered by the gantry crane into the underground experimental cavern around the end of July or beginning of August.

  20. CMS silicon tracker developments

    International Nuclear Information System (INIS)

    Civinini, C.; Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Basti, A.; Biggeri, U.; Bilei, G.M.; Bisello, D.; Boemi, D.; Bosi, F.; Borrello, L.; Bozzi, C.; Braibant, S.; Breuker, H.; Bruzzi, M.; Buffini, A.; Busoni, S.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B.; Ciampolini, P.; Creanza, D.; D'Alessandro, R.; Da Rold, M.; Demaria, N.; De Palma, M.; Dell'Orso, R.; Della Marina, R.D.R.; Dutta, S.; Eklund, C.; Feld, L.; Fiore, L.; Focardi, E.; French, M.; Freudenreich, K.; Frey, A.; Fuertjes, A.; Giassi, A.; Giorgi, M.; Giraldo, A.; Glessing, B.; Gu, W.H.; Hall, G.; Hammarstrom, R.; Hebbeker, T.; Honma, A.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Luebelsmeyer, K.; Lustermann, W.; Maettig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Evoy, B. Mc; Meschini, M.; Messineo, A.; Migliore, E.; My, S.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Papi, A.; Parrini, G.; Passeri, D.; Pieri, M.; Piperov, S.; Potenza, R.; Radicci, V.; Raffaelli, F.; Raymond, M.; Santocchia, A.; Schmitt, B.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Starodumov, A.; Stavitski, I.; Stefanini, G.; Surrow, B.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuuva, T.; Vannini, C.; Verdini, P.G.; Viertel, G.; Xie, Z.; Yahong, Li; Watts, S.; Wittmer, B.

    2002-01-01

    The CMS Silicon tracker consists of 70 m^2 of microstrip sensors, whose design will be finalized at the end of 1999 on the basis of systematic studies of device characteristics as a function of the most important parameters. A fundamental constraint comes from the fact that the detector has to be operated in a very hostile radiation environment with full efficiency. We present an overview of the current results and prospects for converging on a final set of parameters for the silicon tracker sensors.

  1. The CMS detector magnet

    CERN Document Server

    Hervé, A

    2000-01-01

    CMS (Compact Muon Solenoid) is a general-purpose detector designed to run in mid-2005 at the highest luminosity at the LHC at CERN. Its distinctive features include a 6 m free bore diameter, 12.5 m long, 4 T superconducting solenoid enclosed inside a 10,000 tonne return yoke. The magnet will be assembled and tested on the surface by the end of 2003 before being transferred by heavy lifting means to a 90 m deep underground experimental area. The design and construction of the magnet is a `common project' of the CMS Collaboration. It is organized by a CERN based group with strong technical and contractual participation by CEA Saclay, ETH Zurich, Fermilab Batavia IL, INFN Geneva, ITEP Moscow, University of Wisconsin and CERN. The return yoke, 21 m long and 14 m in diameter, is equivalent to 1.5 m of saturated iron interleaved with four muon stations. The yoke and the vacuum tank are being manufactured. The indirectly-cooled, pure- aluminium-stabilized coil is made up from five modules internally wound with four ...

  2. Hadron correlations in CMS

    CERN Document Server

    Maguire, Charles Felix

    2012-01-01

    The measurements of the anisotropic flow of single particles and particle pairs have provided some of the most compelling evidence for the creation of a strongly interacting quark-gluon plasma (sQGP) in relativistic heavy ion collisions, first at RHIC, and more recently at the LHC. Using PbPb collision data taken in the 2010 and 2011 heavy ion runs at the LHC, the CMS experiment has investigated a broad scope of these flow phenomena. The v2 elliptic flow coefficient has been extracted with four different methods to cross-check contributions from initial state fluctuations and non-flow correlations. The measurements of the v2 elliptic anisotropy have been extended to a transverse momentum of 60 GeV/c, which will enable the placement of new quantitative constraints on parton energy loss models as a function of path length in the sQGP medium. Additionally, for the first time at the LHC, the CMS experiment has extracted precise elliptic anisotropy coefficients for the neutral π meson (π0) in the c...

  3. The CMS Event Builder

    CERN Document Server

    Brigljevic, V; Cano, E; Cittolin, Sergio; Csilling, Akos; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutleber, J; Jacobs, C; Kozlovszky, Miklos; Larsen, H; Magrans de Abril, Ildefons; Meijers, F; Meschi, E; Murray, S; Oh, A; Orsini, L; Pollet, L; Rácz, A; Samyn, D; Scharff-Hansen, P; Schwick, C; Sphicas, Paris; ODell, V; Suzuki, I; Berti, L; Maron, G; Toniolo, N; Zangrando, L; Ninane, A; Erhan, S; Bhattacharya, S; Branson, J G

    2003-01-01

    The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements on throughput and scaling are presented. The architecture of the baseline CMS event builder will be outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from fragmen...
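
    The two-stage structure described above (fragments are first concentrated into super-fragments, which are then assembled into full events) can be mimicked with a toy example; the group sizes and data layout below are purely illustrative.

    ```python
    # Toy two-stage event building: stage 1 concentrates fragments from groups of
    # sources into super-fragments, stage 2 assembles them into a full event.
    def build_super_fragments(fragments, group_size=8):
        """Stage 1: concatenate the fragments coming from each group of sources."""
        return [b"".join(fragments[i:i + group_size])
                for i in range(0, len(fragments), group_size)]

    def build_event(super_fragments):
        """Stage 2: assemble all super-fragments belonging to one event."""
        return b"".join(super_fragments)

    # Example: 64 data sources each delivering a small fragment of one event
    fragments = [bytes([i]) * 4 for i in range(64)]
    event = build_event(build_super_fragments(fragments))
    print(len(event))   # 256 bytes for this toy event
    ```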

  4. CMS tracker observes muons

    CERN Multimedia

    2006-01-01

    A computer image of a cosmic ray traversing the many layers of the TEC+ silicon sensors. The first cosmic muon tracks have been observed in one of the CMS tracker endcaps. On 14 March, a sector on one of the two large tracker endcaps underwent a cosmic muon run. Since then, thousands of tracks have been recorded. These data will be used not only to study the tracking, but also to exercise various track alignment algorithms The endcap tested, called the TEC+, is under construction at RWTH Aachen in Germany. The endcaps have a modular design, with silicon strip modules mounted onto wedge-shaped carbon fibre support plates, so-called petals. Up to 28 modules are arranged in radial rings on both sides of these plates. One eighth of an endcap is populated with 18 petals and called a sector. The next major step is a test of the first sector at CMS operating conditions, with the silicon modules at a temperature below -10°C. Afterwards, the remaining seven sectors have to be integrated. In autumn 2006, TEC+ wil...

  5. CMS Tracker Model

    CERN Multimedia

    Model of the tracking detector for the CMS experiment at the LHC. This object is a mock-up of an early design of the CMS Tracker mechanics. It is a segment of a “Wheel” to support Micro-Strip Gas Chamber (MSGC) detector modules on the outer layers and silicon-strip detector modules in the innermost layers. The particularity of that design is that modules are organised in spirals, along which power and optical cables and cooling pipes were planned to be routed. Some of these spirals are illustrated in the mock-up by the colours of the modules. As the detector development progressed, however, it became evident that the silicon detectors would need to be operated at cold temperatures in the LHC experiments, while the MSGCs could stay at normal room temperature. That split into two temperature regimes led to separating the two detector types by a thermal barrier, thereby jeopardizing the idea of using common, vertical Wheels with services arranged along spirals.

  6. CMS ready for winding up

    CERN Multimedia

    2003-01-01

    At the end of October, the last lengths of conductor for the CMS superconducting solenoid were produced. This is another large sub-project of the CMS Magnet to be successfully finished, after completion of the Yoke last year (see Bulletin 43/2002).

  7. CMS Data Analysis School Model

    CERN Document Server

    Malik, Sudhir; Cavanaugh, R; Bloom, K; Chan, Kai-Feng; D'Hondt, J; Klima, B; Narain, M; Palla, F; Rolandi, G; Schörner-Sadenius, T

    2014-01-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born three years ago at the LPC (LHC Physics Center) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data taking, the nature of the earlier training also evolved to include more analysis tools, software tutorials and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration's discovery potential and maximize the physics output. As a bigger goal, CMS is striving to nurture and increase the engagement of the myriad talents of CMS in the development of physics, service, upgrade, education of those new to CMS and the caree...

  8. CERN Open Days 2013, Point 5 - CMS: CMS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: Come to LHC's Point 5 and visit the Compact Muon Solenoid (CMS) experiment that discovered the Higgs boson! Descend 100 metres underground and take a walk in the cathedral-sized cavern housing the 14,000-tonne CMS detector. Ask Higgs hunters and other scientists just about anything, be it questions about their work, particle physics or the engineering challenges of building CMS. On the surface there is no restricted access: Point 5 will be abuzz all day long with activities for all ages, including literally "cool" cryogenics shows featuring the world's fastest ice-cream maker, dance performances, and much more.

  9. CMS Tracker Visualisation

    CERN Document Server

    Mennea, Maria Santa; Zito, Giuseppe

    2004-01-01

    To improve the performance of the existing tracker data visualization tools in IGUANA, 2D visualisation software has been developed using the object-oriented paradigm and software engineering techniques. We have designed 2D graphics objects and some of them have been implemented. Access to the new objects is provided in the ORCA plugin of IGUANA CMS. A new object-oriented tracker model has been designed for developing these 2D graphics objects. The model consists of new classes which represent all its components (layers, modules, rings, petals, rods). The new classes are described here. The last part of this document contains a user manual for the software and will be updated with new releases.

  10. The CMS silicon tracker

    International Nuclear Information System (INIS)

    Focardi, E.; Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Basti, A.; Biggeri, U.; Bilei, G.M.; Bisello, D.; Boemi, D.; Bosi, F.; Borrello, L.; Bozzi, C.; Braibant, S.; Breuker, H.; Bruzzi, M.; Buffini, A.; Busoni, S.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B; Ciampolini, P.; Civinini, C.; Creanza, D.; D'Alessandro, R.; Da Rold, M.; Demaria, N.; De Palma, M.; Dell'Orso, R.; Della Marina, R.; Dutta, S.; Eklund, C.; Feld, L.; Fiore, L.; French, M.; Freudenreich, K.; Frey, A.; Fuertjes, A.; Giassi, A.; Giorgi, M.; Giraldo, A.; Glessing, B.; Gu, W.H.; Hall, G.; Hammarstrom, R.; Hebbeker, T.; Honma, A.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Leubelsmeyer, K.; Lustermann, W.; Maettig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Evoy, B.Mc; Meschini, M.; Messineo, A.; Migliore, E.; My, S.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Papi, A.; Parrini, G.; Passeri, D.; Pieri, M.; Piperov, S.; Potenza, R.; Radicci, V.; Raffaelli, F.; Raymond, M.; Rizzo, F.; Santocchia, A.; Schmitt, B.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Starodumov, A.; Stavitski, I.; Stefanini, G.; Surrow, B.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuuva, T.; Vannini, C.; Verdini, P.G.; Viertel, G.; Xie, Z.; Yahong, Li; Watts, S.; Wittmer, B.

    2000-01-01

    This paper describes the Silicon microstrip Tracker of the CMS experiment at LHC. It consists of a barrel part with 5 layers and two endcaps with 10 disks each. About 10 000 single-sided equivalent modules have to be built, each one carrying two daisy-chained silicon detectors and their front-end electronics. Back-to-back modules are used to read-out the radial coordinate. The tracker will be operated in an environment kept at a temperature of T=-10 deg. C to minimize the Si sensors radiation damage. Heavily irradiated detectors will be safely operated due to the high-voltage capability of the sensors. Full-size mechanical prototypes have been built to check the system aspects before starting the construction

  11. CMS pixel upgrade project

    CERN Document Server

    Kaestli, Hans-Christian

    2010-01-01

    The LHC machine at CERN finished its first year of pp collisions at a center of mass energy of 7 TeV. While the commissioning to exploit its full potential is still ongoing, there are plans to upgrade its components to reach instantaneous luminosities beyond the initial design value after 2016. A corresponding upgrade of the innermost part of the CMS detector, the pixel detector, is needed. A full replacement of the pixel detector is planned in 2016. It will not only address limitations of the present system at higher data rates, but will aggressively lower the amount of material inside the fiducial tracking volume which will lead to better tracking and b-tagging performance. This article gives an overview of the project and illuminates the motivations and expected improvements in the detector performance.

  12. Luminosity measurement at CMS

    CERN Document Server

    Leonard, Jessica Lynn

    2014-01-01

    The measurement of the luminosity delivered by the LHC is pivotal for several key physics analyses. During the first three years of running, tremendous steps forwards have been made in the comprehension of the subtleties related to luminosity monitoring and calibration, which led to an unprecedented accuracy at a hadron collider. The detectors and corresponding algorithms employed to estimate online and offline the luminosity in CMS are described. Details are given concerning the procedure based on the Van der Meer scan technique that allowed a very precise calibration of the luminometers from the determination of the LHC beams parameters. What is being prepared in terms of detector and online software upgrades for the next LHC run is also summarized.
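
    Schematically, the van der Meer calibration mentioned above relates the absolute luminosity to machine parameters and to the effective beam-overlap widths measured in the scan; the numbers in the example below are order-of-magnitude placeholders only.

    ```python
    # Instantaneous luminosity for head-on collisions from van der Meer scan
    # parameters: L = f_rev * n_b * N1 * N2 / (2 * pi * Sigma_x * Sigma_y).
    import math

    def vdm_luminosity(f_rev, n_bunches, n1, n2, sigma_x, sigma_y):
        """Luminosity in cm^-2 s^-1.

        f_rev      : revolution frequency [Hz]
        n_bunches  : number of colliding bunch pairs
        n1, n2     : protons per bunch in each beam
        sigma_x/_y : effective overlap widths measured in the scan [cm]
        """
        return f_rev * n_bunches * n1 * n2 / (2.0 * math.pi * sigma_x * sigma_y)

    # LHC-like placeholder numbers, order of magnitude only (~7e33 cm^-2 s^-1)
    print(vdm_luminosity(11245.0, 1300, 1.1e11, 1.1e11, 2.0e-3, 2.0e-3))
    ```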

  13. CMS hadronic forward calorimeter

    International Nuclear Information System (INIS)

    Merlo, J.P.

    1998-01-01

    Tests of quartz fiber prototypes, based on the detection of Cherenkov light from showering particles, demonstrate a detector possessing all of the desirable characteristics for a forward calorimeter. A prototype for the CMS experiment consists of 0.3 mm diameter fibers embedded in a copper matrix. The response to high energy (10-375 GeV) electrons, pions, protons and muons, the light yield, energy and position resolutions, and signal uniformity and linearity, are discussed. The signal generation mechanism gives this type of detector unique properties, especially for the detection of hadronic showers: Narrow, shallow shower profiles, hermeticity and extremely fast signals. The implications for measurements in the high-rate, high-radiation LHC environment are discussed. (orig.)

  14. Electroweak Results from CMS

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We present recent CMS measurements on electroweak boson production including single, double, and triple boson final states. Electroweak processes span many orders of magnitude in production cross section. Measurements of high-rate processes provide stringent tests of the standard model. In addition, rare triboson processes and final states produced through vector boson scattering are newly accessible with the large integrated luminosity provided by the LHC. If new physics lies just beyond the reach of the LHC, its effects may manifest as enhancements to the high energy kinematics in multiboson production. We present limits on new physics signatures using an effective field theory which models these modifications as modifications of electroweak gauge couplings. Since electroweak measurements will continue to benefit from the increasing integrated luminosity provided by the LHC, the future prospects of electroweak physics are discussed.

  15. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  16. Rivet usage at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Radziej, Markus; Hebbeker, Thomas; Sonnenschein, Lars [III. Phys. Inst. A, RWTH Aachen (Germany)

    2015-07-01

    In this talk an overview of Rivet and its usage in the CMS experiment is presented. Rivet stands for "Robust Independent Validation of Experiment and Theory" and is used for optimizing and validating Monte Carlo event generators. By using the results of published analyses, distributions from the simulation can be compared to experimental measurements (corrected for detector effects). This gives insight into the agreement at particle level. Starting off with an introduction to the Rivet environment, the purpose of this tool in modern particle physics is explained. Before taking a closer look at the analysis structure, the software necessary to produce such comparisons is outlined. Analysis implementations are discussed using code examples, showcasing the powerful framework that Rivet provides. A few selected final distributions displaying both Monte Carlo generated events and recorded data are presented, showing the potential to perform particle-level comparisons.
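
    As a rough illustration of the kind of particle-level comparison Rivet enables, the sketch below fills a toy generator-level histogram and compares it bin by bin to a hypothetical unfolded measurement. Rivet analyses are actually written in C++ against the Rivet API; none of the names below belong to Rivet, and the binning, spectrum and reference numbers are invented.

      import random

      BINS = [0, 20, 40, 60, 80, 100]  # hypothetical pT binning in GeV

      def fill(hist, value, weight=1.0):
          """Add an event weight to the bin that contains `value`."""
          for i in range(len(BINS) - 1):
              if BINS[i] <= value < BINS[i + 1]:
                  hist[i] += weight
                  return

      def chi2(mc, data, data_err):
          """Simple bin-by-bin chi^2 between generator and unfolded data."""
          return sum((m - d) ** 2 / e ** 2 for m, d, e in zip(mc, data, data_err) if e > 0)

      # Toy "generator": leading-jet pT per event, falling spectrum.
      random.seed(1)
      mc_hist = [0.0] * (len(BINS) - 1)
      for _ in range(10000):
          pt = random.expovariate(1.0 / 30.0)  # mean 30 GeV
          fill(mc_hist, pt)

      # Hypothetical unfolded reference measurement with uncertainties.
      data_hist = [4800, 2400, 1300, 700, 350]
      data_err = [70, 50, 36, 27, 19]

      # Normalise the MC to the data integral before comparing shapes.
      scale = sum(data_hist) / sum(mc_hist)
      mc_hist = [m * scale for m in mc_hist]
      print("chi2 =", round(chi2(mc_hist, data_hist, data_err), 1))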

  17. CMS pixel upgrade project

    CERN Document Server

    INSPIRE-00575876

    2011-01-01

    The LHC machine at CERN finished its first year of pp collisions at a center of mass energy of 7 TeV. While the commissioning to exploit its full potential is still ongoing, there are plans to upgrade its components to reach instantaneous luminosities beyond the initial design value after 2016. A corresponding upgrade of the innermost part of the CMS detector, the pixel detector, is needed. A full replacement of the pixel detector is planned in 2016. It will not only address limitations of the present system at higher data rates, but will aggressively lower the amount of material inside the fiducial tracking volume which will lead to better tracking and b-tagging performance. This article gives an overview of the project and illuminates the motivations and expected improvements in the detector performance.

  18. Debugging data transfers in CMS

    International Nuclear Information System (INIS)

    Bagliesi, G; Belforte, S; Bloom, K; Bockelman, B; Bonacorsi, D; Fisk, I; Flix, J; Hernandez, J; D'Hondt, J; Maes, J; Kadastik, M; Klem, J; Kodolova, O; Kuo, C-M; Letts, J; Magini, N; Metson, S; Piedra, J; Pukhaeva, N; Tuura, L

    2010-01-01

    The CMS experiment at CERN is preparing for LHC data taking through several computing preparation activities. In early 2007 a traffic load generator infrastructure for distributed data transfer tests was designed and deployed to equip the WLCG tiers which support the CMS virtual organization with a means for debugging, load-testing and commissioning data transfer routes among CMS computing centres. The LoadTest is based upon PhEDEx as a reliable, scalable dataset replication system. The Debugging Data Transfers (DDT) task force was created to coordinate the debugging of the data transfer links. The task force aimed to commission the most crucial transfer routes among CMS tiers by designing and enforcing a clear procedure to debug problematic links. This procedure moved a link from a debugging phase, in a separate and independent environment, to the production environment once a set of agreed conditions was achieved for that link. The goal was to deliver working transfer routes to the CMS data operations team one by one. The preparation, activities and experience of the DDT task force within the CMS experiment are discussed. Common technical problems and challenges encountered in debugging data transfer links during the lifetime of the task force are explained and summarized.
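
    The commissioning logic implied by such a procedure can be pictured as a simple rule over recent test traffic, as in the sketch below; the metrics, thresholds and the ready_for_production helper are hypothetical and do not reproduce the actual conditions agreed by the DDT task force.

      from dataclasses import dataclass

      @dataclass
      class DaySample:
          """One day of test traffic on a candidate transfer link."""
          transferred_gb: float
          attempted_files: int
          failed_files: int

      def ready_for_production(samples, min_days=5, min_rate_gb=100.0, max_fail_frac=0.2):
          """Hypothetical commissioning rule: enough consecutive days of test
          traffic, each above a minimum daily volume and below a failure-rate cap.
          The real DDT criteria were agreed within CMS and may differ."""
          if len(samples) < min_days:
              return False
          for day in samples[-min_days:]:
              if day.transferred_gb < min_rate_gb:
                  return False
              if day.attempted_files and day.failed_files / day.attempted_files > max_fail_frac:
                  return False
          return True

      history = [DaySample(150.0, 400, 30) for _ in range(6)]
      print(ready_for_production(history))  # True for this synthetic history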

  19. New CMS detectors under construction at CERN

    CERN Multimedia

    Katarina Anthony

    2012-01-01

    While the LHC will play the starring role in the 2013/2014 Long Shutdown (LS1), the break will also be a chance for its experiments to upgrade their detectors. CMS will be expanding its current muon detection systems, fitting 72 new cathode strip chambers (CSC) and 144 new resistive plate chambers (RPC) to the endcaps of the detector. These new chambers are currently under construction in Building 904.   CMS engineers install side panels on a CSC detector in Building 904. "The original RPC and CSC detectors were constructed in bits and pieces around the world," says Armando Lanaro, CSC construction co-ordinator. "But for the construction of these additional chambers, we decided to unify the assembly and testing into a single facility at CERN. There, CMS technicians, engineers and physicists are taking raw materials and transforming them into installation-ready detectors.” This new facility can be found in Building 904. Once the assembly site for the strai...

  20. Improving collaborative documentation in CMS

    International Nuclear Information System (INIS)

    Lassila-Perini, Kati; Salmi, Leena

    2010-01-01

    Complete and up-to-date documentation is essential for efficient data analysis in a large and complex collaboration like CMS. Good documentation reduces the time spent in problem solving for users and software developers. The scientists in our research environment do not necessarily have the interests or skills of professional technical writers. This results in inconsistencies in the documentation. To improve the quality, we have started a multidisciplinary project involving CMS user support and expertise in technical communication from the University of Turku, Finland. In this paper, we present possible approaches to study the usability of the documentation, for instance, usability tests conducted recently for the CMS software and computing user documentation.

  1. Evolution of CMS Workload Management Towards Multicore Job Support

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Calero Yzquierdo, A. [Madrid, CIEMAT; Hernández, J. M. [Madrid, CIEMAT; Khan, F. A. [Quaid-i-Azam U.; Letts, J. [UC, San Diego; Majewski, K. [Fermilab; Rodrigues, A. M. [Fermilab; McCrea, A. [UC, San Diego; Vaandering, E. [Fermilab

    2015-12-23

    The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup, complex collision events represent a challenge for traditional sequential programming in terms of memory and processing-time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
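
    The internal partitioning of a multicore pilot can be illustrated with a toy greedy scheduler like the one below; the schedule function, the job names and the eight-core pilot are invented for illustration and stand in for the much richer matchmaking done by the real glideinWMS-based pilots.

      def schedule(pilot_cores, queued_payloads):
          """Greedy partitioning of a multicore pilot among payloads of mixed
          core counts. `queued_payloads` is a list of (job_id, cores_requested)."""
          free = pilot_cores
          started, waiting = [], []
          for job_id, cores in sorted(queued_payloads, key=lambda p: -p[1]):
              if cores <= free:
                  started.append((job_id, cores))
                  free -= cores
              else:
                  waiting.append((job_id, cores))
          return started, waiting, free

      started, waiting, idle = schedule(8, [("reco-1", 4), ("reco-2", 4), ("analysis-1", 1)])
      print(started)  # multicore reconstruction payloads fill the slot first
      print(waiting)  # the single-core analysis job waits for the next cycle
      print(idle)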

  2. Opportunistic usage of the CMS online cluster using a cloud overlay

    CERN Document Server

    Chaze, Olivier; Andronidis, Anastasios; Behrens, Ulf; Branson, James; Brummer, Philipp; Contescu, Alexandru-Cristian; Cittolin, Sergio; Craigs, Benjamin; Darlea, Georgiana-Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, M; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Glege, Frank; Gomez-Ceballos, Guillelmo; Hegeman, Jeroen; Holzner, Andre Georg; Jimenez-Estupiñán, Raul; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph; Pieri, Marco; Racz, Attila; Sakulin, Hannes; Schwick, Christoph; Reis, Thomas; Simelevicius, Dainius; Zejdl, Petr

    2016-01-01

    After two years of maintenance and upgrade, the Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started its second three year run. Around 1500 computers make up the CMS (Compact Muon Solenoid) Online cluster. This cluster is used for Data Acquisition of the CMS experiment at CERN, selecting and sending to storage around 20 TBytes of data per day that are then analysed by the Worldwide LHC Computing Grid (WLCG) infrastructure that links hundreds of data centres worldwide. 3000 CMS physicists can access and process data, and are always seeking more computing power and data. The backbone of the CMS Online cluster is composed of 16000 cores which provide as much computing power as all CMS WLCG Tier1 sites (352K HEP-SPEC-06 score in the CMS cluster versus 300K across CMS Tier1 sites). The computing power available in the CMS cluster can significantly speed up the processing of data, so an effort has been made to allocate the resources of the CMS Online cluster to t...

  3. The CMS detector before closure

    CERN Multimedia

    Patrice Loiez

    2006-01-01

    The CMS detector before testing with cosmic-ray muons, which are produced when high-energy particles from space crash into the Earth's atmosphere and generate a cascade of energetic particles. After closing CMS, the magnets, calorimeters, trackers and muon chambers were tested on a small section of the detector as part of the magnet test and cosmic challenge. This test checked the alignment and functionality of the detector systems, as well as the magnets.

  4. Recent SUSY Results from CMS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present a summary of the recent results of searches for supersymmetry conducted by the CMS experiment. Several searches are reported using complementary final states and methods. The results presented include searches for stops and sbottoms, production of charginos and neutralinos, and R-parity violating signatures. Several of them are the first results of their kind from CMS, while others increased the mass reach significantly over previously published results from the LHC.

  5. Forward energy measurement with CMS

    CERN Document Server

    Kheyn, Lev

    2016-01-01

    Energy flow is measured in the forward region of CMS, at pseudorapidities up to 6.6, in pp interactions at 13 TeV with the forward (HF) and very forward (CASTOR) calorimeters. The results are compared to model predictions. The CMS results at different centre-of-mass energies are intercompared using the pseudorapidity variable shifted by the beam rapidity, thus studying the applicability of the hypothesis of limiting fragmentation.
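
    The shifted variable mentioned above is simply eta minus the beam rapidity, with the beam rapidity well approximated by ln(sqrt(s)/m_p) for protons at LHC energies. A minimal sketch of the shift (the function names are chosen here only for illustration):

      import math

      M_PROTON = 0.938272  # GeV

      def beam_rapidity(sqrt_s_gev):
          """Beam rapidity of a proton beam, y_beam ~ ln(sqrt(s)/m_p)
          in the ultra-relativistic limit."""
          return math.log(sqrt_s_gev / M_PROTON)

      def shifted_eta(eta, sqrt_s_gev):
          """Pseudorapidity shifted by the beam rapidity, the variable used to
          intercompare forward energy flow at different centre-of-mass energies."""
          return eta - beam_rapidity(sqrt_s_gev)

      for sqrt_s in (900.0, 7000.0, 13000.0):  # GeV
          print(sqrt_s, round(beam_rapidity(sqrt_s), 2), round(shifted_eta(5.0, sqrt_s), 2))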

  6. Inauguration of the CMS solenoid

    CERN Multimedia

    Maximilien Brice

    2005-01-01

    In early 2005 the final piece of the CMS solenoid magnet arrived, marked by this ceremony held in the CMS assembly hall at Cessy, France. The solenoid is made up of five pieces totaling 12.5 m in length and 6 m in diameter. Weighing 220 tonnes, it will produce a 4 T magnetic field, 100 000 times the strength of the Earth's magnetic field and store enough energy to melt 18 tonnes of gold.

  7. CMS Dashboard Task Monitoring: A user-centric monitoring view

    International Nuclear Information System (INIS)

    Karavakis, Edward; Khan, Akram; Andreeva, Julia; Maier, Gerhild; Gaidioz, Benjamin

    2010-01-01

    We are now in a phase change of the CMS experiment where people are turning more intensely to physics analysis and away from construction. This brings many challenging issues with respect to the monitoring of user analysis. Physicists must be able to monitor the execution status, application and grid-level messages of their tasks, which may run at any site within the CMS Virtual Organisation. The CMS Dashboard Task Monitoring project provides this information to individual analysis users by collecting and exposing a user-centric set of information regarding submitted tasks, including the reason of failure, the distribution by site and over time, the consumed time and the efficiency. The development was user-driven, with physicists invited to test the prototype in order to assemble further requirements and identify weaknesses in the application.
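
    The kind of user-centric aggregation described above can be sketched as a reduction over per-job records, as below; the field names, site names and the summarise_task helper are illustrative only and do not reflect the actual Dashboard schema.

      from collections import defaultdict

      def summarise_task(jobs):
          """Aggregate per-job records of one analysis task into a user-centric
          view: success/failure counts per site, dominant failure reason, and a
          simple CPU efficiency. Field names are illustrative only."""
          per_site = defaultdict(lambda: {"done": 0, "failed": 0})
          reasons = defaultdict(int)
          cpu, wall = 0.0, 0.0
          for job in jobs:
              site = per_site[job["site"]]
              if job["status"] == "done":
                  site["done"] += 1
              else:
                  site["failed"] += 1
                  reasons[job.get("reason", "unknown")] += 1
              cpu += job.get("cpu_time", 0.0)
              wall += job.get("wall_time", 0.0)
          top_reason = max(reasons, key=reasons.get) if reasons else None
          efficiency = cpu / wall if wall else 0.0
          return dict(per_site), top_reason, round(efficiency, 2)

      jobs = [
          {"site": "T2_ES_CIEMAT", "status": "done", "cpu_time": 3200, "wall_time": 3600},
          {"site": "T2_ES_CIEMAT", "status": "failed", "reason": "stage-out error",
           "cpu_time": 100, "wall_time": 900},
          {"site": "T2_US_UCSD", "status": "done", "cpu_time": 3300, "wall_time": 3500},
      ]
      print(summarise_task(jobs))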

  8. The CMS link system

    International Nuclear Information System (INIS)

    Vila, I.

    1999-01-01

    The Compact Muon Solenoid (CMS) is a multi-purpose detector that is going to be installed in the future Large Hadron Collider (LHC) at CERN. Muons are one of the main physical signatures of the expected new physics. The muons are going to be detected by the Central Tracker (CT) and the Muon Spectrometer (MS). Both the CT and the MS can provide an independent muon momentum measurement, but for all η and momentum values the highest precision is achieved when the muon tracks are reconstructed using both tracking detectors. The calorimeter and solenoid volumes separate the CT and the MS by about three meters. It has been shown that the alignment of the CT with respect to the MS cannot be guaranteed by a software alignment on a reasonable time scale. Therefore, an opto-mechanical system (the multipoint link system) has been designed to monitor, on-line, the relative position of both sub-detectors, providing a common reference frame for them. The local alignment of the muon barrel spectrometer determines the relative position of the muon chambers with respect to each other and also with respect to a carbon fiber rigid structure called MAB (Module for the Alignment of the Barrel). There are a total of 36 MABs distributed on the boundary planes of each muon spectrometer sector. This paper describes all the equipment and presents the principle of measurement. (author)

  9. CMS multicore scheduling strategy

    International Nuclear Information System (INIS)

    Yzquierdo, Antonio Pérez-Calero; Hernández, Jose; Holzman, Burt; Majewski, Krista; McCrea, Alison

    2014-01-01

    In the coming years, processor architectures based on much larger numbers of cores will most likely be the model to continue 'Moore's Law' style throughput gains. This not only results in many more jobs running the LHC Run 1 era monolithic applications in parallel, but the memory requirements of these processes also push the worker-node architectures to the limit. One solution is parallelizing the application itself, through forking and memory sharing or through threaded frameworks. CMS is following all of these approaches and has a comprehensive strategy to schedule multicore jobs on the GRID based on the glideinWMS submission infrastructure. The main component of the scheduling strategy, a pilot-based model with dynamic partitioning of resources that allows the transition to multicore or whole-node scheduling without disallowing the use of single-core jobs, is described. This contribution also presents the experience gained with the proposed multicore scheduling schema and gives an outlook on further developments working towards the restart of the LHC in 2015.

  10. The CMS crystal calorimeter

    CERN Document Server

    Lustermann, W

    2004-01-01

    The measurement of the energy of electrons and photons with very high accuracy is of primary importance for the study of many physics processes at the Large Hadron Collider (LHC), in particular for the search for the Higgs boson. The CMS experiment will use a crystal calorimeter with pointing geometry, covering almost 4π, as it offers a very good energy resolution. It is divided into a barrel composed of 61200 lead tungstate crystals, two end-caps with 14648 crystals and a pre-shower detector in front of the end-caps. The challenges of the calorimeter design arise from the high-radiation environment, the 4 Tesla magnetic field, the high bunch-crossing rate of 40 MHz and the large dynamic range, requiring the development of fast, radiation-hard crystals, photo-detectors and readout electronics. An overview of the construction and design of the calorimeter is presented, with emphasis on some of the details required to meet the demanding performance goals. 19 Refs.

  11. CMS Trigger Performance

    CERN Document Server

    Donato, Silvio

    2017-01-01

    During its second run of operation (Run 2), which started in 2015, the LHC will deliver a peak instantaneous luminosity that may reach $2 \cdot 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realized by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has been through a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT go through big improvements; in particular, new appr...
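
    As an illustration of the high-level quantities mentioned above, the sketch below computes a two-object invariant mass and applies a toy mass-window selection; the path name, window and inputs are invented and do not correspond to any actual CMS trigger menu.

      import math

      def invariant_mass(p4a, p4b):
          """Invariant mass of two four-momenta (E, px, py, pz) in GeV:
          m^2 = (E1 + E2)^2 - |p1 + p2|^2."""
          e = p4a[0] + p4b[0]
          px, py, pz = (p4a[i] + p4b[i] for i in (1, 2, 3))
          m2 = e * e - (px * px + py * py + pz * pz)
          return math.sqrt(max(m2, 0.0))

      def dimuon_path(muons, mass_window=(70.0, 110.0)):
          """Toy trigger path: accept the event if any muon pair falls inside
          the mass window (a Z-like window; thresholds are illustrative)."""
          for i in range(len(muons)):
              for j in range(i + 1, len(muons)):
                  if mass_window[0] < invariant_mass(muons[i], muons[j]) < mass_window[1]:
                      return True
          return False

      # Two back-to-back 45.5 GeV muons (massless approximation) -> m ~ 91 GeV.
      muons = [(45.5, 45.5, 0.0, 0.0), (45.5, -45.5, 0.0, 0.0)]
      print(dimuon_path(muons))  # True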

  12. A new dawn for CMS

    CERN Multimedia

    2007-01-01

    Supported by a gigantic crane and a factory-size room full of enthusiasm, the central barrel of CMS made its final journey underground on 28 February. The central section of the CMS detector starts its dramatic 10-hour descent underground. Several hours (and 100 metres) later, the massive barrel rests on the cavern floor. CMS scientists, journalists, photographers and members of the transport crew basked in the final rays of the 'solenoid-set' on 28 February as the central barrel of the CMS detector sank below the horizon and began its ten-hour descent into the cavern 100 metres below. Thirteen metres long and weighing as much as five jumbo jets (1920 tonnes), the barrel is the largest of the 15 chunks of the CMS detector that are being lowered one by one into the cavern. 'This is a challenging feat of engineering, as there are just 20 cm of leeway between the detector and the walls of the shaft,' said Austin Ball, Technical Coordinator of CMS. The section of the detector, which contains the solenoid of the magne...

  13. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels, and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS, with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
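
    To make the access pattern concrete, the sketch below shows a hypothetical HTTPS client for a DBS-like catalogue service, authenticating with a GRID certificate pair; the endpoint path, query parameter, URL and the use of the third-party requests library are assumptions for illustration and do not reproduce the real DBS Python API.

      # Hypothetical sketch of a client for a DBS-like HTTPS service.
      # Endpoint path, parameters and field names are illustrative only.
      import requests  # third-party; assumed available

      def list_datasets(base_url, pattern, cert, key, ca_bundle=True):
          """Query a dataset-catalogue web service over HTTPS, authenticating
          with a GRID certificate/key pair, and return the decoded JSON reply."""
          response = requests.get(
              f"{base_url}/datasets",       # hypothetical resource
              params={"dataset": pattern},  # hypothetical query parameter
              cert=(cert, key),             # client certificate authentication
              verify=ca_bundle,
              timeout=30,
          )
          response.raise_for_status()
          return response.json()

      if __name__ == "__main__":
          datasets = list_datasets(
              "https://example.org/dbs",    # placeholder URL
              "/SingleMuon/*/AOD",
              cert="usercert.pem",
              key="userkey.pem",
          )
          for entry in datasets:
              print(entry)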

  14. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels, and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS, with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  15. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels, and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS, with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  16. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels, and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS, with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  17. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower.

    Science.gov (United States)

    Reddemann, Antje; Horn, Renate

    2018-03-11

    Cytoplasmic male sterility (CMS) systems represent ideal mutants to study the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. CMS PET2 originates from an interspecific cross of H. petiolaris and H. annuus, as does CMS PET1, but results in a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. However, this duplication was followed by an insertion of 271 bp of unknown origin in the 5' coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5' end of atp9. Orf231 consists, apart from the first 3 bp that are part of the 271-bp insertion, of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports the involvement of these orfs in the male sterility based on CMS PET2.

  18. Recombination Events Involving the atp9 Gene Are Associated with Male Sterility of CMS PET2 in Sunflower

    Directory of Open Access Journals (Sweden)

    Antje Reddemann

    2018-03-01

    Full Text Available Cytoplasmic male sterility (CMS) systems represent ideal mutants to study the role of mitochondria in pollen development. In sunflower, CMS PET2 also has the potential to become an alternative CMS source for commercial sunflower hybrid breeding. CMS PET2 originates from an interspecific cross of H. petiolaris and H. annuus, as does CMS PET1, but results in a different CMS mechanism. Southern analyses revealed differences for atp6, atp9 and cob between CMS PET2, CMS PET1 and the male-fertile line HA89. A second identical copy of atp6 was present on an additional CMS PET2-specific fragment. In addition, the atp9 gene was duplicated. However, this duplication was followed by an insertion of 271 bp of unknown origin in the 5′ coding region of the atp9 gene in CMS PET2, which led to the creation of two unique open reading frames, orf288 and orf231. The first 53 bp of orf288 are identical to the 5′ end of atp9. Orf231 consists, apart from the first 3 bp that are part of the 271-bp insertion, of the last 228 bp of atp9. These CMS PET2-specific orfs are co-transcribed. All 11 editing sites of the atp9 gene present in orf231 are fully edited. The anther-specific reduction of the co-transcript in fertility-restored hybrids supports the involvement of these orfs in the male sterility based on CMS PET2.

  19. Powerfarm: A power and emergency management thread-based software tool for the ATLAS Napoli Tier2

    International Nuclear Information System (INIS)

    Doria, Alessandra; Carlino, Gianpaolo; Merola, Leonardo; Iengo, Salvatore; Ricciardi, Sergio; Staffa, Mariacarla

    2010-01-01

    The large computing power and storage systems available at a Grid site or in a computing center are composed of several servers and devices, each with its own specific role in the center. A management and fault-recovery system is of primary importance for management operations and for preserving the integrity of the systems in case of emergencies, such as power outages or temperature peaks. We developed Powerfarm, a customizable thread-based software system that monitors several parameters, such as the status of power supplies and room and CPU temperatures, and promptly reacts to values out of range with the appropriate actions. Powerfarm enforces hardware and software dependencies between devices and is able to switch them on/off in the particular order induced by the dependencies, so it is also useful for site administrators to speed up the management of scheduled downtimes.
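
    The reaction logic described above can be reduced to a toy example: check a sensor against its allowed range and, if it is out of range, switch devices off in the order induced by their dependencies. All device names, limits and the single-pass structure below are invented; Powerfarm itself is thread-based and far more complete.

      ROOM_TEMP_LIMIT_C = 35.0

      # Each device lists the devices that must be switched off before it.
      DEPENDENCIES = {
          "storage-array": ["worker-nodes"],  # stop the farm before the storage
          "worker-nodes": [],
          "network-switch": ["storage-array", "worker-nodes"],
      }

      def shutdown_order(devices, dependencies):
          """Topological ordering so that dependents go down before the
          devices they rely on."""
          order, seen = [], set()
          def visit(dev):
              if dev in seen:
                  return
              seen.add(dev)
              for dep in dependencies.get(dev, []):
                  visit(dep)
              order.append(dev)
          for dev in devices:
              visit(dev)
          return order

      def check_and_react(room_temp_c):
          """One monitoring pass: if the room temperature is out of range,
          switch devices off in the dependency-induced order."""
          if room_temp_c <= ROOM_TEMP_LIMIT_C:
              return []
          return shutdown_order(DEPENDENCIES.keys(), DEPENDENCIES)

      print(check_and_react(33.0))  # []  -> everything within range
      print(check_and_react(41.0))  # ['worker-nodes', 'storage-array', 'network-switch']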

  20. Hanford 1999 Tier 2 Emergency and Hazardous Chemical Inventory Emergency Planning and Community Right-to-Know Act Section 312

    International Nuclear Information System (INIS)

    ZALOUDEK, D.E.

    2000-01-01

    The Hanford Site covers approximately 1,450 square kilometers (560 square miles) of land that is owned by the U.S. Government and managed by the U.S. Department of Energy, Richland Operations Office (DOE-RL). The Hanford Site is located northwest of the city of Richland, Washington. The city of Richland adjoins the southeastern portion of the Hanford Site boundary and is the nearest population center. Activities on the Hanford Site are centralized in numerically designated areas. The 100 Areas, located along the Columbia River, contain deactivated reactors. The processing units are in the 200 Areas, which are on a plateau approximately 11 kilometers (7 miles) from the Columbia River. The 300 Area, located adjacent to and north of Richland, contains research and development laboratories. The 400 Area, 8 kilometers (5 miles) northwest of the 300 Area, contains the Fast Flux Test Facility previously used for testing liquid metal reactor systems. Adjacent to the north of Richland, the 1100 Area contains offices associated with administration, maintenance, transportation, and materials procurement and distribution. The 600 Area covers all locations not specifically given an area designation. This Tier Two Emergency and Hazardous Chemical Inventory report contains information pertaining to hazardous chemicals managed by DOE-RL and its contractors on the Hanford Site. It does not include chemicals maintained in support of activities conducted by others on lands covered by leases, use permits, easements, and other agreements whereby land is used by parties other than DOE-RL. For example, this report does not include chemicals stored on state owned or leased lands (including the burial ground operated by US Ecology, Inc.), lands owned or used by the Bonneville Power Administration (including the Midway Substation and the Ashe Substation), lands used by the National Science Foundation (the Laser Interferometer Gravitational-Wave Observatory), lands leased to the Washington

  1. The CMS Data Quality Monitoring software experience and future improvements

    CERN Document Server

    De Guio, Federico

    2013-01-01

    The Data Quality Monitoring (DQM) Software proved to be a central tool in the CMS experiment. Its flexibility allowed its integration in several environments: Online, for real-time detector monitoring; Offline, for the final, fine-grained Data Certification; Release Validation, to constantly validate the functionality and the performance of the reconstruction software; and Monte Carlo productions. The central tool to deliver Data Quality information is a web site for browsing data quality histograms (the DQM GUI). In this contribution the usage of the DQM Software in the different environments and its integration in the CMS Reconstruction Software Framework and in all production workflows are presented.

  2. Job life cycle management libraries for CMS workflow management projects

    International Nuclear Information System (INIS)

    Lingen, Frank van; Wilkinson, Rick; Evans, Dave; Foulkes, Stephen; Afaq, Anzar; Vaandering, Eric; Ryu, Seangchan

    2010-01-01

    Scientific analysis and simulation require the processing and generation of millions of data samples. These tasks are often composed of multiple smaller tasks divided over multiple (computing) sites. This paper discusses the Compact Muon Solenoid (CMS) workflow infrastructure, and specifically the Python-based workflow library which is used for so-called task lifecycle management. The CMS workflow infrastructure consists of three layers: high-level specification of the various tasks based on input/output data sets, lifecycle management of task instances derived from the high-level specification, and execution management. The workflow library is the result of a convergence of three CMS sub-projects that respectively deal with scientific analysis, simulation and real-time data aggregation from the experiment. This will reduce duplication and hence development and maintenance costs.
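
    A task lifecycle of the kind managed by such a library can be pictured as a small state machine over task instances derived from a high-level input/output specification; the states, transitions and dataset names in the sketch below are illustrative and are not the library's actual API.

      # A toy task life-cycle state machine; states and transitions are invented.
      ALLOWED = {
          "new":       {"created"},
          "created":   {"submitted"},
          "submitted": {"running", "failed"},
          "running":   {"done", "failed"},
          "failed":    {"submitted"},  # allow resubmission
          "done":      set(),
      }

      class Task:
          def __init__(self, name, input_dataset, output_dataset):
              self.name = name
              self.input_dataset = input_dataset    # high-level specification
              self.output_dataset = output_dataset
              self.state = "new"

          def advance(self, new_state):
              """Move the task instance to `new_state`, enforcing the life cycle."""
              if new_state not in ALLOWED[self.state]:
                  raise ValueError(f"{self.state} -> {new_state} not allowed")
              self.state = new_state
              return self

      task = Task("reco-2010A", "/Mu/Run2010A-RAW", "/Mu/Run2010A-RECO")
      task.advance("created").advance("submitted").advance("running").advance("done")
      print(task.name, task.state)  # reco-2010A done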

  3. The CMS Outer Hadron Calorimeter

    CERN Document Server

    Acharya, Bannaje Sripathi; Banerjee, Sunanda; Banerjee, Sudeshna; Bawa, Harinder Singh; Beri, Suman Bala; Bhandari, Virender; Bhatnagar, Vipin; Chendvankar, Sanjay; Deshpande, Pandurang Vishnu; Dugad, Shashikant; Ganguli, Som N; Guchait, Monoranjan; Gurtu, Atul; Kalmani, Suresh Devendrappa; Kaur, Manjit; Kohli, Jatinder Mohan; Krishnaswamy, Marthi Ramaswamy; Kumar, Arun; Maity, Manas; Majumder, Gobinda; Mazumdar, Kajari; Mondal, Naba Kumar; Nagaraj, P; Narasimham, Vemuri Syamala; Patil, Mandakini Ravindra; Reddy, L V; Satyanarayana, B; Sharma, Seema; Singh, B; Singh, Jas Bir; Sudhakar, Katta; Tonwar, Suresh C; Verma, Piyush

    2006-01-01

    The CMS hadron calorimeter is a sampling calorimeter with brass absorber and plastic scintillator tiles with wavelength-shifting fibres carrying the light to the readout device. The barrel hadron calorimeter is complemented with an outer calorimeter to ensure high-energy shower containment in CMS, thus acting as a tail catcher. Fabrication, testing and calibration of the outer hadron calorimeter are carried out keeping in mind its importance for the energy measurement of jets in terms of linearity and resolution. It will provide a net improvement in missing $E_T$ measurements at LHC energies. The outer hadron calorimeter has a very good signal-to-background ratio even for a minimum ionising particle and can hence be used in coincidence with the Resistive Plate Chambers of the CMS detector for the muon trigger.

  4. EDSP Tier 2 test (T2T) guidances and protocols are delivered, including web-based guidance for diagnosing and scoring, and evaluating EDC-induced pathology in fish and amphibian

    Science.gov (United States)

    The Agency’s Endocrine Disruptor Screening Program (EDSP) consists of two tiers. The first tier provides information regarding whether a chemical may have endocrine disruption properties. Tier 2 tests provide confirmation of ED effects and dose-response information to be us...

  5. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab; Bockelman, B. [Nebraska U.; Dykstra, D. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Girone, M. [CERN; Gutsche, O. [Fermilab; Holzman, B. [Fermilab; Hugnagel, D. [Fermilab; Kim, H. [Fermilab; Kennedy, R. [Fermilab; Mason, D. [Fermilab; Spentzouris, P. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab; Vaandering, E. [Fermilab

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues and the cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large-scale resources scheduled at peak times.

  6. Distributing CMS Data between the Florida T2 and T3 Centers using Lustre and Xrootd-fs

    International Nuclear Information System (INIS)

    Kaganas, G; Rodriguez, J L; Cheng, M; Avery, P; Bourilkov, D; Fu, Y; Palencia, J

    2014-01-01

    We have developed remote data access to large volumes of data over the Wide Area Network, based on the Lustre filesystem and Kerberos authentication for security. In this paper we explore a prototype for two-step data access from worker nodes at Florida Tier3 centers, located behind a firewall and using a private network, to data hosted on the Lustre filesystem at the University of Florida CMS Tier2 center. At the Tier3 center we use a client which securely mounts the Lustre filesystem and hosts an XrootD server. The worker nodes access the data from the Tier3 client using POSIX-compliant tools via the XrootD-fs filesystem. We perform scalability tests with up to 200 jobs running in parallel on the Tier3 worker nodes.
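
    A scalability test of this two-step access can be sketched as many concurrent POSIX reads through the mounted filesystem, with the aggregate throughput recorded; the mount point, file names and worker count below are placeholders and do not reproduce the actual test harness.

      import time
      from concurrent.futures import ThreadPoolExecutor

      def read_file(path, chunk=4 * 1024 * 1024):
          """Stream one file through the POSIX mount and return the bytes read."""
          total = 0
          with open(path, "rb") as handle:
              while True:
                  data = handle.read(chunk)
                  if not data:
                      return total
                  total += len(data)

      def run_test(paths, workers=200):
          """Read all files in parallel and return the aggregate MB/s."""
          start = time.time()
          with ThreadPoolExecutor(max_workers=workers) as pool:
              read_bytes = sum(pool.map(read_file, paths))
          elapsed = time.time() - start
          return read_bytes / 1e6 / elapsed

      if __name__ == "__main__":
          # Placeholder paths on a hypothetical xrootdfs/Lustre mount point.
          files = [f"/mnt/xrootdfs/store/test/file_{i}.root" for i in range(200)]
          print(f"aggregate throughput: {run_test(files):.1f} MB/s")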

  7. 25th May 2011 - Egyptian Minister for Scientific Research, Science and Technology A. Ezzat Salama signing the guest book with CERN Director-General R. Heuer and visiting CMS control centre with Collaboration Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    He visited the CMS control room on the Meyrin site with, from left, CMS spokesperson, Guido Tonelli, Alaa Awad, Fayum University, Hisham Badr, ambassador at the UN Geneva, and Maged Elsherbiny, president of the Scientific Research Academy.

  8. 42 CFR 401.108 - CMS rulings.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS rulings. 401.108 Section 401.108 Public Health... GENERAL ADMINISTRATIVE REQUIREMENTS Confidentiality and Disclosure § 401.108 CMS rulings. (a) After... regulations, but which has been adopted by CMS as having precedent, may be published in the Federal Register...

  9. Search for leptoquarks at CMS

    CERN Document Server

    Morse, David Michael

    2018-01-01

    A summary of the current experimental searches for leptoquarks with the CMS detector at the CERN LHC is presented, along with updates of new results from analyses performed using the full 2016 proton-proton dataset, corresponding to 35.9 fb$^{-1}$.

  10. Machine Learning applications in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Machine Learning is used in many aspects of CMS data taking, monitoring, processing and analysis. We review a few of these use cases and the most recent developments, with an outlook to future applications in the LHC Run III and for the High-Luminosity phase.

  11. CMS launches new educational tools

    CERN Document Server

    Corinne Pralavorio

    2014-01-01

    On 5 and 11 November, almost 90 pupils from the Fermi scientific high school in Livorno, Italy, took part in two Masterclass sessions organised by CMS.   CMS Masterclass participants.  The pupils took over a hall at CERN for an afternoon to test a new software tool called CIMA (CMS Instrument for Masterclass Analysis) for the first time. The software simplifies the process of recording results and reduces the number of steps required to enter data. During the exercise, each group of pupils had to analyse about a hundred events from the LHC. For each event, the budding physicists determined whether what they saw was a candidate W boson, Z boson or Higgs boson, identified the decay mode and entered key data. At the end of the analysis, they used the results to reconstruct a mass diagram. CIMA was developed by a team of scientists from the University of Aachen, Germany, the University of Notre-Dame, United States, and CERN. CMS has also added yet another educational tool to its already l...

  12. CMS - The Compact Muon Solenoid

    CERN Multimedia

    Bergauer, T; Waltenberger, W; Kratschmer, I; Treberer-treberspurg, W; Escalante del valle, A; Andreeva, I; Innocente, V; Camporesi, T; Malgeri, L; Marchioro, A; Moneta, L; Weingarten, W; Beni, N T; Cimmino, A; Rovere, M; Jafari, A; Lange, C G; Vartak, A P; Gilbert, A J; Pantaleo, F; Reis, T; Cucciati, G; Alipour tehrani, N; Stakia, A; Fallavollita, F; Pizzichemi, M; Rauco, G; Zhang, S; Hu, T; Yazgan, E; Zhang, H; Thomas-wilsker, J; Reithler, H K V; Philipps, B; Merschmeyer, M K; Heidemann, C A; Mukherjee, S; Geenen, H; Kuessel, Y; Weingarten, S; Gallo, E; Schwanenberger, C; Walsh bastos rangel, R; Beernaert, K S; De wit, A M; Elwood, A C; Connor, P; Lelek, A A; Wichmann, K H; Myronenko, V; Kovalchuk, N; Bein, S L; Dreyer, T; Scharf, C; Quast, G; Dierlamm, A H; Barth, C; Mol, X; Kudella, S; Schafer, D; Schimassek, R R; Matorras, F; Calderon tazon, A; Garcia ferrero, J; Bercher, M J; Sirois, Y; Callier, S; Depasse, P; Laktineh, I B; Grenier, G; Boudoul, G; Heath, G P; Hartley, D A; Quinton, S; Tomalin, I R; Harder, K; Francis, V B; Thea, A; Zhang, Z; Loukas, D; Hernath, S T; Naskar, K; Colaleo, A; Maggi, G P; Maggi, M; Loddo, F; Calabria, C; Campanini, R; Cuffiani, M; D'antone, I; Grandi, C; Navarria, F; Guiducci, L; Battilana, C; Tosi, N; Gulmini, M; Meola, S; Longo, E; Meridiani, P; Marzocchi, B; Schizzi, A; Cho, S; Ha, S; Kim, D H; Kim, G N; Md halid, M F B; Yusli, M N B; Dominik, W M; Bunkowski, K; Olszewski, M; Byszuk, A P; Rasteiro da silva, J C; Varela, J; Leong, Q; Sulimov, V; Vorobyev, A; Denisov, A; Murzin, V; Egorov, A; Lukyanenko, S; Postoev, V; Pashenkov, A; Solovey, A; Rubakov, V; Troitsky, S; Kirpichnikov, D; Lychkovskaya, N; Safronov, G; Fedotov, A; Toms, M; Barniakov, M; Olimov, K; Fazilov, M; Umaraliev, A; Dumanoglu, I; Bakirci, N M; Dozen, C; Demiroglu, Z S; Isik, C; Zeyrek, M; Yalvac, M; Ozkorucuklu, S; Chang, Y; Dolgopolov, A; Gottschalk, E E; Maeshima, K; Heavey, A E; Kramer, T; Kwan, S W L; Taylor, L; Tkaczyk, S M; Mokhov, N; Marraffino, J M; Mrenna, S; Yarba, V; Banerjee, B; Elvira, V D; Gray, L A; Holzman, B; Dagenhart, W; Canepa, A; Ryu, S C; Strobbe, N C; Adelman-mc carthy, J K; Contescu, A C; Andre, J O; Wu, J; Dittmer, S J; Bucinskaite, I; Zhang, J; Karchin, P E; Thapa, P; Zaleski, S G; Gran, J L; Wang, S; Zilizi, G; Raics, P P; Bhardwaj, A; Naimuddin, M; Smiljkovic, N; Stojanovic, M; Brandao malbouisson, H; De oliveira martins, C P; Tonelli manganote, E J; Medina jaime, M; Thiel, M; Laurila, S H; Graehling, P; Tonon, N; Blekman, F; Postiau, N J S; Leroux, P J; Van remortel, N; Janssen, X J; Di croce, D; Aleksandrov, A; Shopova, M F; Dogra, S M; Shinoda, A A; Arce, P; Daniel, M; Navarrete marin, J J; Redondo fernandez, I; Guirao elias, A; Cela ruiz, J M; Lottin, J; Gras, P; Kircher, F; Levesy, B; Payn, A; Guilloux, F; Negro, G; Leloup, C; Pasztor, G; Panwar, L; Bhatnagar, V; Bruzzi, M; Sciortino, S; Starodubtsev, O; Azzi, P; Conti, E; Lacaprara, S; Margoni, M; Rossin, R; Tosi, M; Fano', L; Lucaroni, A; Biino, C; Dattola, D; Rotondo, F; Ballestrero, A; Obertino, M M; Kiani, M B; Paterno, A; Magana villalba, R; Ramirez garcia, M; Reyes almanza, R; Gorski, M; Wrochna, G; Bluj, M J; Zarubin, A; Nozdrin, M; Ladygin, V; Malakhov, A; Golunov, A; Skrypnik, A; Sotnikov, A; Evdokimov, N; Tiurin, V; Lokhtin, I; Ershov, A; Platonova, M; Tyurin, N; Slabospitskii, S; Talov, V; Belikov, N; Ryazanov, A; Chao, Y; Tsai, J; Foord, A; Wood, D R; Orimoto, T J; Luckey, P D; Jaditz, S H; Stephans, G S; Darlea, G L; Di matteo, L; Maier, B; Trovato, M; Bhattacharya, S; Roberts, J B; 
Padley, P B; Tu, Z; Rorie, J T; Clarida, W J; Tiras, E; Khristenko, V; Cerizza, G; Pieri, M; Krutelyov, V; Saiz santos, M D; Klein, D S; Derdzinski, M; Murray, M J; Gray, J A; Minafra, N; Castle, J R; Bowen, J L S; Buterbaugh, K; Morrow, S I; Bunn, J; Newman, H; Spiropulu, M; Balcas, J; Lawhorn, J M; Thomas, S D; Panwalkar, S M; Kyriacou, S; Xie, Z; Ojalvo, I R; Salfeld-nebgen, J; Laird, E M; Wimpenny, S J; Yates, B R; Perry, T M; Schiber, C C; Diaz, D C; Uniyal, R; Mesic, B; Kolosova, M; Snow, G R; Lundstedt, C; Johnston, D; Zvada, M; Weitzel, D J; Damgov, J V; Cowden, C S; Giammanco, A; David, P N Y; Zobec, J; Cabrera jamoulle, J B; Daubie, E; Nash, J A; Evans, L; Hall, G; Nikitenko, A; Ryan, M J; Huffman, M A J; Styliaris, E; Evangelou, I; Sharan, M K; Roy, A; Rout, P K; Kalbhor, P N; Bagliesi, G; Braccini, P L; Ligabue, F; Boccali, T; Rizzi, A; Minuti, M; Oh, S; Kim, J; Sen, S; Boz evinay, M; Xiao, M; Hung, W T; Jensen, F O; Mulholland, T D; Kumar, A; Jones, M; Roozbahani, B H; Neu, C C; Thacker, H B; Wolfe, E M; Jabeen, S; Gilmore, J; Winer, B L; Rush, C J; Luo, W; Alimena, J M; Ko, W; Lander, R; Broadley, W H; Shi, M; Furic, I K; Low, J F; Bortignon, P; Alexander, J P; Zientek, M E; Conway, J V; Padilla fuentes, Y L; Florent, A H; Bravo, C B; Crotty, I M; Wenman, D L; Sarangi, T R; Ghabrous larrea, C; Gomber, B; Smith, N C; Long, K D; Roberts, J M; Hildreth, M D; Jessop, C P; Karmgard, D J; Loukas, N; Ferbel, T; Zielinski, M A; Cooper, S I; Jung, A; Van driessche, W G M; Fagot, A; Vermassen, B; Valchkova-georgieva, F K; Dimitrov, D S; Roumenin, T S; Podrasky, V; Re, V; Zucca, S; De canio, F; Romaniuk, R; Teodorescu, L; Krofcheck, D; Anderson, N G; Bell, S T; Salazar ibarguen, H A; Kudinov, V; Onishchenko, S; Naujikas, R; Lyubynskiy, V; Sobolev, O; Khan, M S; Adeel-ur-rehman, A; Hassan, Q U; Ali, I; Kreuzer, P K; Robson, A J; Gadrat, S G; Ivanov, A; Mendis, D; Da silva di calafiori, D R; Zeinali, M; Behnamian, H; Moroni, L; Malvezzi, S; Park, I; Pastika, N J; Oropeza barrera, C; Elkhateeb, E A A; Elmetenawee, W; Mohammed, Y; Tayel, E S A; Mcclatchey, R H; Kovacs, Z; Munir, K; Odeh, M; Magradze, E; Oikashvili, B; Shingade, P; Shukla, R A; Banerjee, S; Kumar, S; Jashal, B K; Grzanka, L; Adam, W; Ero, J; Fabjan, C; Jeitler, M; Rad, N K; Auffray hillemanns, E; Charkiewicz, A; Fartoukh, S; Garcia de enterria adan, D; Girone, M; Glege, F; Loos, R; Mannelli, M; Meijers, F; Sciaba, A; Meschi, E; Ricci, D; Petrucciani, G; Daguin, J; Vazquez velez, C; Karavakis, E; Nourbakhsh, S; Rabady, D S; Ceresa, D; Karacheban, O; Beguin, M; Kilminster, B J; Ke, Z; Meng, X; Zhang, Y; Tao, J; Romeo, F; Spiezia, A; Cheng, L; Zhukov, V; Feld, L W; Autermann, C T; Fischer, R; Erdweg, S; Kress, T H; Dziwok, C; Hansen, K; Schoerner-sadenius, T M; Marfin, I; Keaveney, J M; Diez pardos, C; Muhl, C W; Asawatangtrakuldee, C; Defranchis, M M; Asmuss, J P; Poehlsen, J A; Stober, F M H; Vormwald, B R; Kripas, V; Gonzalez vazquez, D; Kurz, S T; Niemeyer, C; Rieger, J O; Borovkov, A; Shvetsov, I; Sieber, G; Caspart, R; Iqbal, M A; Sander, O; Metzler, M B; Ardila perez, L E; Ruiz jimeno, A; Fernandez garcia, M; Scodellaro, L; Gonzalez sanchez, J F; Curras rivera, E; Semeniouk, I; Ochando, C; Bedjidian, M; Giraud, N A; Mathez, H; Zoccarato, Y D; Ianigro, J; Galbit, G C; Flacher, H U; Shepherd-themistocleous, C H; French, M J; Hill, J A; Jones, L L; Markou, A; Bencze, G L; Mishra, D K; Netrakanti, P K; Jha, V; Chudasama, R; Katta, S; Venditti, R; Cristella, L; Braibant-giacomelli, S; Dallavalle, G; Fabbri, F; Codispoti, G; 
Borgonovi, L; Caponero, M A; Berti, L; Fienga, F; Dafinei, I; Organtini, G; Del re, D; Pettinacci, V; Park, S K; Lee, K S; Kang, M; Kim, B; Park, H K; Kong, D J; Lee, S; Pak, S I; Zolkapli, Z B; Konecki, M A; Walczak, M B; Bargassa, P; Viegas guerreiro leonardo, N T; Levchenko, P; Orishchin, E; Suvorov, V; Uvarov, L; Gruzinskii, N; Pristavka, A; Kozlov, V; Radovskaia, A; Solovey, A; Kolosov, V; Vlassov, E; Parygin, P; Tumasyan, A; Topakli, H; Boran, F; Akin, I V; Oz, C; Gulmez, E; Atakisi, I O; Bakken, J A; Govi, G M; Lewis, J D; Shaw, T M; Bailleux, D; Stoynev, S E; Sexton-kennedy, E M; Huang, C; Lincoln, D W; Roser, R; Ito, A; Adams, M R; Apanasevich, L; Varelas, N; Sandoval gonzalez, I D; Hangal, D A; Yoo, J H; Ovcharova, A K; Bradmiller-feld, J W; Amin, N J; Miller, M P; Patterson, A S; Sharma, R K; Santoro, A; Lassila-perini, K M; Tuominiemi, J; Voutilainen, M A; Wu, X; Gross, L O; Le bihan, A; Fuks, B; Kieffer, E; Pansanel, J; Jansova, M; D'hondt, J; Abuzeid hassan, S A; Bilin, B; Beghin, D; Soultanov, G; Vankov, I D; Konstantinov, P B; Marra da silva, J; De souza santos, A; Arruda ramalho, L; Renker, D; Erdmann, W; Molinero vela, A; Fernandez bedoya, C; Bachiller perea, I; Chipaux, R; Faure, J D; Hamel de monchenault, G; Mandjavidze, I; Rander, J; Ferri, F; Leroy, C L; Machet, M; Nagy, M I; Felcini, M; Kaur, S; Saizu, M A; Civinini, C; Latino, G; Checchia, P; Ronchese, P; Vanini, S; Fantinel, S; Cecchi, C; Leonardi, R; Arneodo, M; Ruspa, M; Pacher, L; Rabadan trejo, R I; Mondragon herrera, C A; Golutvin, I; Zhiltsov, V; Melnichenko, I; Mjavia, D; Cheremukhin, A; Zubarev, E; Kalagin, V; Alexakhin, V; Mitsyn, V; Shulha, S; Vishnevskiy, A; Gavrilenko, M; Boos, E E; Obraztsov, S; Dubinin, M; Demiyanov, A; Dudko, L; Azhgirey, I; Chikilev, O; Turchanovich, L; Rurua, L; Hou, G W; Wang, M; Chang, P; Kumar, A; Liau, J; Lazic, D; Lawson, P D; Zou, D; Wisecarver, A L; Sumorok, K C; Klute, M; Lee, Y; Iiyama, Y; Velicanu, D A; Mc ginn, C; Abercrombie, D R; Tatar, K; Hahn, K A; Nussbaum, T W; Southwick, D C; Cittolin, S; Martin, T; Welke, C V; Wilson, G W; Baringer, P S; Sanders, S J; Mcbrayer, W J; Engh, D J; Sheldon, P D; Gurrola, A; Velkovska, J A; Melo, A M; Padeken, K O; Johnson, C N; Ni, H; Montalvo, R J; Heindl, M D; Ferguson, T; Vogel, H; Mudholkar, T K; Elmer, P; Tully, C; Luo, J; Hanson, G; Jandir, P S; Askew, A W; Kadija, K; Dimovasili, E; Attikis, A; Vasilas, I; Chen, G; Bockelman, B P; Kamalieddin, R; Barrefors, B P; Farleigh, B S; Akchurin, N; Demin, P; Pavlov, B A; Petkov, P S; Goranova, R; Tomsa, J; Lyons, L; Buchmuller, O; Magnan, A; Laner ogilvy, C; Di maria, R; Dutta, S; Thakur, S; Bettarini, S; Bosi, F; Giassi, A; Massa, M; Calzolari, F; Androsov, K; Lee, H; Komurcu, Y; Kim, D W; Wagner, S R; Perloff, A S; Rappoccio, S R; Harrington, C I; Baden, A R; Ricci-tam, F; Kamon, T; Rathjens, D; Pernie, L; Larsen, D; Ji, W; Pellett, D E; Smith, J; Acosta, D E; Field, R D; Yelton, J M; Kotov, K; Wang, S; Smolenski, K W; Mc coll, N W; Dasu, S R; Lanaro, A; Cook, J R; Gorski, T A; Buchanan, J J; Jain, S; Musienko, Y; Taroni, S; Meng, H; Siddireddy, P K; Xie, W; Rott, C; Benedetti, D; Everett, A A; Schulte, J; Mahakud, B; Ryckbosch, D D E; Crucy, S; Cornelis, T G M; Betev, B; Dimov, H; Raykov, P A; Uzunova, D G; Mihovski, K T; Mechinsky, V; Makarenko, V; Yermak, D; Yevarouskaya, U; Salvini, P; Manghisoni, M; Fontaine, J; Agram, J; Palinkas, J; Reid, I D; Bell, A J; Clyne, M N; Zavodchikov, S; Veelken, C; Kannike, K; Dewanjee, R K; Skarupelov, V; Piibeleht, M; Ehataht, K; Chang, S; 
Kuchinski, P; Bukauskas, L; Zhmurin, P; Kamal, A; Mubarak, M; Asghar, M I; Ahmad, N; Muhammad, S; Mansoor-ul-islam, S; Saddique, A; Waqas, M; Irshad, A; Veckalns, V; Toda, S; Choi, Y K; Yu, I; Hwang, C; Yumiceva, F X; Djambazov, L; Meinhard, M T; Becker, R J U; Grimm, O; Wallny, R S; Tavolaro, V R; Eller, P D; Meister, D; Paktinat mehdiabadi, S; Chenarani, S; Dini, P; Leporini, R; Dinardo, M; Brianza, L; Hakkarainen, U T; Parashar, N; Malik, S; Ramirez vargas, J E; Dharmaratna, W; Noh, S; Uang, A J; Kim, J H; Lee, J S H; Jeon, D; You, Z; Assran, Y; Elgammal, S; Ellithi kamel, A Y; Nayak, A K; Dash, D; Koca, N; Kothekar, K K; Karnam, R; Patil, M R; Torims, T; Hoch, M; Schieck, J R; Valentan, M; Spitzbart, D; Lucio alves, F L; Blanchot, G; Gill, K A; Orsini, L; Petrilli, A; Sharma, A; Tsirou, A; Deile, M; Hudson, D A; Gutleber, J; Folch, R; Tropea, P; Cerminara, G; Vichoudis, P; Pardo, T; Sabba, H; Selvaggi, M; Verzetti, M; Ngadiuba, J; Kornmayer, A; Niedziela, J; Aarrestad, T K; He, K; Li, B; Huang, Q; Pierschel, G; Esch, T; Louis, D; Quast, T; Nowack, A S; Beissel, F; Borras, K A; Mankel, R; Pitzl, D D; Kemp, Y; Meyer, A B; Krucker, D B; Mittag, G; Burgmeier, A; Lenz, T; Arndt, T M; Pflitsch, S K; Danilov, V; Dominguez damiani, D; Cardini, A; Kogler, R; Troendle, D C; Aggleton, R C; Lange, J; Reimers, A C; De boer, W; Weber, M M; Theel, A; Mozer, M U; Wayand, S; Harrendorf, M A; Harbaum, T R; El morabit, K; Marco, J; Rodrigo, T; Vila alvarez, I; Lopez garcia, A; Rembser, J; Mathieu, A; Kurca, T; Mirabito, L; Verdier, P; Combaret, C; Newbold, D M; Smith, V; Brooke, J J; Metson, S; Coughlan, J A; Torbet, M J; Belyaev, A; Kyriakis, A; Horvath, D; Veszpremi, V; Topkar, A; Selvaggi-maggi, G; Nuzzo, S V; Romano, F; Marangelli, B; Spinoso, V; Lezki, S; Castro, A; Rovelli, T; Brigliadori, L; Bianco, S; Fabbricatore, P; Farinon, S; Musenich, R; Ferro, F; Gozzelino, A; Buontempo, S; Casolaro, P; Paramatti, R; Vignati, M; Belforte, S; Hong, B; Roh, Y J; Choi, S Y; Son, D; Yang, Y C; Butanov, K; Kotobi, A; Krolikowski, J; Pozniak, K T; Misiura, M; Seixas, J C; Jain, A K; Nemallapudi, M V; Shchipunov, L; Lebedev, V; Skorobogatov, V; Klimenko, K; Terkulov, A; Kirakosyan, M; Azarkin, M; Krasnikov, N; Stepanova, L; Gavrilov, V; Spiridonov, A; Semenov, S; Krokhotin, A; Rusinov, V; Chistov, R; Zhemchugov, E; Nishonov, M; Hmayakyan, G; Khachatryan, V; Ozdemir, K; Ozturk, S; Tali, B; Kangal, E E; Turkcapar, S; Zorbakir, I S; Aliyev, T; Demir, D A; Liu, W; Apollinari, G; Osborne, I; Genser, K; Lammel, S; Whitmore, J; Mommsen, R; Apyan, A; Badgett jr, W F; Atac, M; Joshi, U P; Vidal, R A; Giacchetti, L A; Merkel, P; Johnson, M E; Soha, A L; Tran, N V; Rapsevicius, V; Hirschauer, J F; Voirin, E; Altunay cheung, M; Liu, T T; Mosquera morales, J F; Gerber, C E; Chen, X; Clarke, C J; Stuart, D D; Franco sevilla, M; Marsh, B J; Shivpuri, R K; Adzic, P; De almeida pacheco, M A; Matos figueiredo, D; De queiroz franco, A B; Melo de almeida, M; Bernardo valadao, R; Linden, T; Tuovinen, E V; Jarvinen, T T; Siikonen, H J L; Ripp-baudot, I L; Richer, M; Vander velde, C; Randle-conde, A S; Dong, J; Van haevermaet, H J H; Dimitrov, L; De paula bianchini, C; Muller cascadan, A; Kotlinski, B; Alcaraz maestre, J; Josa mutuberria, M I; Gonzalez lopez, O; Marin munoz, J; Puerta pelayo, J; Rodriguez vazquez, J J; Denegri, D; Jarry, P; Rosowsky, A; Tsipolitis, G; Grunewald, M; Singh, J; Chawla, R; Gupta, R; Giordano, F; Parrini, G; Russo, L; Dosselli, U; Mazzucato, M; Verlato, M; Wulzer, A; Traldi, S; Bortolato, D; Biasini, M; 
Bilei, G M; Movileanu, M; Santocchia, A; Mariani, V; Mariotti, C; Monaco, V; Accomando, E; Pinna angioni, G L; Boimska, B; Yuldashev, B; Kamenev, A; Belotelov, I; Filozova, I; Bunin, P; Golovanov, G; Gribushin, A; Kaminskiy, A; Volkov, P; Vorotnikov, G; Bityukov, S; Kryshkin, V; Petrov, V; Volkov, A; Troshin, S; Levin, A; Sumaneev, O V; Kalinin, A; Kulagin, N; Mandrik, P; Lin, C; Kovalskyi, D; Demiragli, Z; Hsu, D G; Michlin, B A; Fountain, M; Debbins, P A; Durgut, S; Tadel, M; White, A; Molina-perez, J A; Dost, J M; Boren, S S; Klein, A; Bhatti, A; Mesropian, C; Wilkinson, R; Xie, S; Marlow, D R; Jindal, P; Palmer, C A; Narain, M; Berry, E A; Usai, E; Korotkov, A L; Strossman, W; Kennedy, E; Burt, K F; Saha, A; Starodumov, A; Mavromanolakis, G; Nicolaou, C; Mao, Y; Claes, D R; Sill, A F; Lamichhane, K; Antunovic, Z; Piotrzkowski, K; Bondu, O; Dimitrov, A A; Albajar, C; Torga teixeira, R F; Iles, G M; Borg, J; Cripps, N A; Uchida, K; Fayer, S W; Wright, J C; Kokkas, P; Manthos, N; Bhattacharya, S; Nandan, S; Bellazzini, R; Carboni, A; Arezzini, S; Yang, U K; Roskes, J; Corcodilos, L A; Nauenberg, U; Johnson, D; Kharchilava, A; Mc lean, C A; Cox, B B; Hirosky, R J; Cummings, G E; Skuja, A; Bard, R L; Mueller, R D; Puigh, D M; Chertok, M B; Calderon de la barca sanchez, M; Gunion, J F; Vogt, R; Conway, R T; Gearhart, J W; Band, R E; Kukral, O; Korytov, A; Fu, Y; Madorsky, A; Brinkerhoff, A W; Rinkevicius, A; Mcdermott, K P; Tao, Z; Bellis, M; Gronberg, J B; Hauser, J; Bachtis, M; Kubic, J; Nash, W A; Greenler, L S; Caillol, C S; Woods, N; De jesus pardal vicente, M; Trembath-reichert, S; Singovski, A; Wolf, M; Smith, G N; Bucci, R E; Reinsvold, A C; Rupprecht, N C; Taus, R A; Buccilli, A T; Kroeger, R S; Reidy, J J; Barnes, V E; Kress, M K; Thieman, J R; Mccartin, J W; Gul, M; Khvastunov, I; Georgiev, I G; Biselli, A; Berzano, U; Vai, I; Braghieri, A; Cardoso lopes, R; Cuevas maestro, J F; Palencia cortezon, J E; Reucroft, S; Bheesette, S; Butler, A; Ivanov, A; Mizelkov, M; Kashpydai, O; Kim, J; Janulis, M; Zemleris, V; Ali, A; Ahmed, U S; Awan, M I; Lee, J; Dissertori, G; Pauss, F; Musella, P; Gomez espinosa, T A; Pigazzini, S; Vesterbacka olsson, M L; Klijnsma, T; Khakzad, M; Arfaei, H; Bonesini, M; Ciriolo, V; Gomez moreno, B; Linares garcia, L E; Bae, S; Ko, B; Hatakeyama, K; Mahmoud mohammed, M A; Aly, A; Ahmad, A; Bahinipati, S; Kim, T J; Goh, J; Fang, W; Kemularia, O; Melkadze, A; Sharma, S; Rane, A P; Ayala amaya, E R; Akle, B; Palomo pinto, F R; Madlener, T; Spanring, M; Pol, M E; Alda junior, W L; Rodrigues simoes moreira, P; Kloukinas, K; Onnela, A T O; Passardi, G; Perez, E F; Postema, W J; Petagna, P; Gaddi, A; Vieira de castro ferreira da silva, P M; Gastal, M; Dabrowski, A E; Mersi, S; Bianco, M; Alandes pradillo, M; Chen, Y; Kieseler, J; Bawej, T A; Roedne, L T; Hugo, G; Baschiera, M; Loiseau, T L; Donato, S; Wang, Y; Liu, Z; Yue, X; Teng, C; Wang, Z; Liao, H; Zhang, X; Chen, Y; Ahmad, M; Zhao, H; Qi, F; Li, B; Raupach, F; Tonutti, M P; Radziej, M; Fluegge, G; Haj ahmad, W; Kunsken, A; Roy, D M; Ziemons, T; Behrens, U; Henschel, H M; Kleinwort, C H; Dammann, D J; Van onsem, G P; Contreras campana, C J; Penno, M; Haranko, M; Singh, A; Turkot, O; Scheurer, V; Schleper, P; Schwandt, J; Schwarz, D; Hartmann, F; Muller, T; Mallows, S; Funke, D; Baselga bacardit, M; Mitra, S; Martinez rivero, C; Moya martin, D; Hidalgo villena, S; Chazin quero, B; Mine, P M G; Poilleux, P R; Salerno, R A; Martin perez, C; Amendola, C; Caponetto, L; Pugnere, D Y; Giraud, Y A N; Sordini, V; Grimes, M 
A; Burns, D J P; Harper, S J; Hajdu, C; Vami, T A; Dutta, D; Pant, L M; Kumar, V; Sarin, P; Di florio, A; Giacomelli, P; Montanari, A; Siroli, G P; Robutti, E; Maron, G; Fabozzi, F; Galati, G; Rovelli, C I; Della ricca, G; Vazzoler, F; Oh, Y D; Park, W H; Kwon, K H; Choi, J; Kalinowski, A; Santos amaral, L C; Di francesco, A; Velichko, G; Smirnov, I; Kozlov, V; Vavilov, S; Kirianov, A; Dremin, I; Rusakov, S; Nechitaylo, V; Kovzelev, A; Toropin, A; Anisimov, A; Barniakov, A; Gasanov, E; Eskut, E; Polatoz, A; Karaman, T; Zorbilmez, C; Bat, A; Tok, U G; Dag, H; Kaya, O; Tekten, S; Lin, T; Abdoulline, S; Bauerdick, L; Denisov, D; Gingu, C; Green, D; Nahn, S C; Prokofiev, O E; Strait, J B; Los, S; Bowden, M; Tanenbaum, W M; Guo, Y; Dykstra, D W; Mason, D A; Chlebana, F; Cooper, W E; Anderson, J M K; Weber, H A; Christian, D C; Alyari, M F; Diaz cruz, J A; Wang, M; Berry, D R; Siehl, K F; Poudyal, N; Kyre, S A; Mullin, S D; George, C; Szabo, Z; Malhotra, S; Milosevic, J; Prado da silva, W L; Martins mundim filho, L; Sanchez rosas, L J; Karimaki, V J; Toor, S Z; Karadzhinova, A G; Maazouzi, C; Van hove, P J; Hosselet, J; Goorens, R; Brun, H L; Kalsi, A K; Wang, Q; Vannerom, D; Antchev, G; Iaydjiev, P S; Mitev, G M; Amadio, G; Langenegger, U; Kaestli, H C; Meier, B; Fernandez ramos, J P; Besancon, M; Fabbro, B; Ganjour, S; Locci, E; Gevin, O; Suranyi, O; Bansal, S; Kumar, R; Sharma, S; Tuve, C N; Tricomi, A; Meschini, M; Paoletti, S; Sguazzoni, G; Gori, V; Carlin, R; Dal corso, F; Simonetto, F; Torassa, E; Zumerle, G; Borsato, E; Gonella, F; Dorigo, A; Larsen, H; Peroni, C; Trapani, P P; Buarque franzosi, D; Tamponi, U; Mejia guisao, J A; Zepeda fernandez, C H; Szleper, M; Zalewski, P D; Rybka, D K; Gorbunov, I; Perelygin, V; Kozlov, G; Semenov, R; Khvedelidze, A; Kodolova, O; Klyukhin, V; Snigirev, A; Kryukov, A; Ukhanov, M; Sobol, A; Bayshev, I; Akimenko, S; Lei, Y; Chang, Y; Kao, K; Lin, S; Yu, P; Li, Y; Fantasia, C; Gastler, D E; Paus, C; Wyslouch, B; Knuteson, B O; Azzolini, V; Goncharov, M; Brandt, S; Chen, Z; Liu, J; Chen, Z; Freed, S M; Zhang, A; Nachtman, J M; Penzo, A; Akgun, U; Yi, K; Rahmat, R; Gandrajula, R P; Dilsiz, K; Letts, J; Sharma, V A; Holzner, A G; Wuerthwein, F K; Padhi, S; Suarez silva, I M; Tapia takaki, D J; Stringer, R W; Kropivnitskaya, A; Majumder, D; Al-bataineh, A A; Gabella, W E; Johns, W E; Mora, J G; Shi, Z; Ciesielski, R A; Bornheim, A; Bartz, E H; Doroshenko, J; Halkiadakis, E; Salur, S; Robles, J A; Gray, R C; Saka, H; Osherson, M A; Hughes, E J; Paulini, M G; Russ, J S; Jang, D W; Piroue, P; Olsen, J D; Sands, W; Saluja, S; Cutts, D; Hadley, M H; Hakala, J C; Clare, R; Luthra, A P; Paneva, M I; Seto, R K; Mac intire, D A; Tentindo, S; Wahl, H; Chokheli, D; Micanovic, S; Razis, P; Mousa, J; Pantelides, S; Qian, S; Li, W; Stieger, B B; Lee, S W; Michotte de welle, D; De favereau de jeneret, J; Bakhshiansohi, H; Krintiras, G; Caputo, C; Sabev, C; Batinkov, A I; Zenz, S C; Pesaresi, M F; Summers, S P; Saoulidou, N; Koraka, C K; Ghosh, S; Sikdar, A K; Castaldi, R; Dell'orso, R; Palmonari, F; Rolandi, L; Moggi, A; Fedi, G; Coscetti, S; Seo, S H; Cankocak, K; Cumalat, J P; Smith, J G; Iashvili, I; Gallo, S M; Parker, A M; Ledovskoy, A; Hung, P Q; Vaman, D; Goodell, J D; Gomez, J A; Celik, A; Luo, S; Hill, C S; Francis, B P; Tripathi, S M; Squires, M K; Thomson, J A; Brainerd, C; Tuli, S; Bourilkov, D; Mitselmakher, G; Patterson, J R; Kuznetsov, V Y; Tan, S M; Strohman, C R; Rebassoo, F O; Valouev, V; Zelepukin, S; Lusin, S; Vuosalo, C O U; Ruggles, T H; Rusack, R; 
Woodard, A E; Meng, F; Dev, N; Vishnevskiy, D; Cremaldi, L M; Oliveros tautiva, S J; Jones, T M; Wang, F; Zaganidis, N; Tytgat, M G; Fedorov, A; Korjik, M; Panov, V; Montagna, P; Vitulo, P; Traversi, G; Gonzalez caballero, I; Eysermans, J; Logatchev, O; Orlov, A; Tikhomirov, A; Kulikova, T; Strumia, A; Nam, S K; Soric, I; Padimanskas, M; Siddiqi, H M; Qazi, S F; Ahmad, M; Makouski, M; Chakaberia, I; Mitchell, T B; Baarmand, M; Hits, D; Theofilatos, K; Mohr, N; Jimenez estupinan, R; Micheli, F; Pata, J; Corrodi, S; Mohammadi najafabadi, M; Menasce, D L; Pedrini, D; Malberti, M; Linn, S L; Mesa, D; Tuuva, T; Carrillo montoya, C A; Roque romero, G A; Suwonjandee, N; Kim, H; Khalil ibrahim, S S; Mahrous mohamed kassem, A M; Trojman, L; Sarkar, U; Bhattacharya, S; Babaev, A; Okhotnikov, V; Nakad, Z S; Fruhwirth, R; Majerotto, W; Mikulec, I; Rohringer, H; Strauss, J; Krammer, N; Hartl, C; Pree, E; Rebello teles, P; Ball, A; Bialas, W; Brachet, S B; Gerwig, H; Lourenco, C; Mulders, M P; Vasey, F; Wilhelmsson, M; Dobson, M; Botta, C; Dunser, M F; Pol, A A; Suthakar, U; Takahashi, Y; De cosa, A; Hreus, T; Chen, G; Chen, H; Jiang, C; Yu, T; Klein, K; Schulz, J; Preuten, M; Millet, P N; Keller, H C; Pistone, C; Eckerlin, G; Jung, J; Mnich, J; Jansen, H; Wissing, C; Savitskyi, M; Eichhorn, T V; Harb, A; Botta, V; Martens, I; Knolle, J; Eren, E; Reichelt, O; Schutze, P J; Saibel, A; Schettler, H H; Schumann, S; Kutzner, V G; Husemann, U; Giffels, M; Akbiyik, M; Friese, R M; Baur, S S; Faltermann, N; Kuhn, E; Gottmann, A I D; Muller, D; Balzer, M N; Maier, S; Schnepf, M J; Wassmer, M; Renner, C W; Tcherniakhovski, D; Piedra gomez, J; Vilar cortabitarte, R; Trevisani, N; Boudry, V; Charlot, C P; Tran, T H; Thiant, F; Lethuillier, M M; Perries, S O; Popov, A; Morrissey, Q; Brummitt, A J; Bell, S J; Assiouras, P; Sikler, F; De palma, M; Fiore, L; Pompili, A; Marzocca, C; Errico, F; Soldani, E; Cavallo, F R; Rossi, A M; Torromeo, G; Masetti, G; Virgilio, S; Thyssen, F D M; Iorio, A O M; Montecchi, M; Santanastasio, F; Bulfon, C; Zanetti, A M; Casarsa, M; Han, D; Song, J; Ibrahim, Z A B; Faccioli, P; Gallinaro, M; Beirao da cruz e silva, C; Kuznetsova, E; Levchuk, L; Andreev, V; Toropin, A; Dermenev, A; Karpikov, I; Epshteyn, V; Uliyanov, A; Polikarpov, S; Markin, O; Cagil, A; Karapinar, G; Isildak, B; Yu, S; Banicz, K B; Cheung, H W K; Butler, J N; Quigg, D E; Hufnagel, D; Rakness, G L; Spalding, W J; Bhat, P; Kreis, B J; Jensen, H B; Chetluru, V; Albert, M; Hu, Z; Mishra, K; Vernieri, C; Larson, K E; Zejdl, P; Matulik, M; Cremonesi, M; Doualot, N; Ye, Z; Wu, Z; Geffert, P B; Dutta, V; Heller, R E; Dorsett, A L; Choudhary, B C; Arora, S; Ranjeet, R; Melo da costa, E; Torres da silva de araujo, F; Da silveira, G G; Alves coelho, E; Belchior batista das chagas, E; Buss, N H; Luukka, P R; Tuominen, E M; Havukainen, J J; Tigerstedt, U B S; Goerlach, U; Patois, Y; Collard, C; Mathieu, C; Lowette, S R J; Python, Q P; Moortgat, S; Vanlaer, P; De lentdecker, G W P; Rugovac, S; Tavernier, F F; Beaumont, W; Van de klundert, M; Vankov, P H; Verguilov, V Z; Hadjiiska, R M; De moraes gregores, E; Iope, R L; Ruiz vargas, J C; Barcala riveira, M J; Hernandez calama, J M; Oller, J C; Flix molina, J; Navarro tobar, A; Sastre alvaro, J; Redondo ferrero, D D; Titov, M; Bausson, P; Major, P; Bala, S; Dhingra, N; Kumari, P; Costa, S; Pelli, S; Meneguzzo, A T; Passaseo, M; Pegoraro, M; Montecassiano, F; Dorigo, T; Silvestrin, L; Del duca, V; Demaria, N; Ferrero, M I; Mussa, R; Cartiglia, N; Mazza, G; Maina, E; Dellacasa, G; 
Covarelli, R; Cotto, G; Sola, V; Monteil, E; Shchelina, K; Castilla-valdez, H; De la cruz burelo, E; Kazana, M; Gorbunov, N; Kosarev, I; Smirnov, V; Korenkov, V; Savina, M; Lanev, A; Semenyushkin, I; Kashunin, I; Krouglov, N; Markina, A; Bunichev, V; Zotov, N; Miagkov, I; Nazarova, E; Uzunyan, A; Riutin, R; Tsverava, N; Paganis, E; Chen, K; Lu, R; Psallidas, A; Gorodetzky, P P; Hazen, E S; Avetisyan, A; Richardson, C A; Busza, W; Roland, C E; Cali, I A; Marini, A C; Wang, T; Schmitt, M H; Geurts, F; Ecklund, K M; Repond, J O; Schmidt, I; George, N; Ingram, F D; Wetzel, J W; Ogul, H; Spanier, S M; Mrak tadel, A; Zevi della porta, G J; Maguire, C F; Janjam, R K; Chevtchenko, S; Zhu, R; Voicu, B R; Mao, J; Stone, R L; Schnetzer, S R; Nash, K C; Kunnawalkam elayavalli, R; Laflotte, I; Weinberg, M G; Mc cracken, M E; Kalogeropoulos, A; Raval, A H; Cooperstein, S B; Landsberg, G; Kwok, K H M; Ellison, J A; Gary, J W; Si, W; Hagopian, V; Hagopian, S L; Bertoldi, M; Brigljevic, V; Ptochos, F; Ather, M W; Konstantinou, S; Yang, D; Li, Q; Attebury, G; Siado castaneda, J E; Lemaitre, V; Caebergs, T P M; Litov, L B; Fernandez de troconiz, J; Colling, D J; Davies, G J; Raymond, D M; Virdee, T S; Bainbridge, R J; Lewis, P; Rose, A W; Bauer, D U; Sotiropoulos, S; Papadopoulos, I; Triantis, F; Aslanoglou, X; Majumdar, N; Devadula, S; Ciocci, M A; Messineo, A; Palla, F; Grippo, M T; Yu, G B; Willemse, T; Lamsa, J; Blumenfeld, B J; Maksimovic, P; Gritsan, A; Cocoros, A A; Arnold, P; Tonwar, S C; Eno, S C; Mignerey, A L C; Nabili, S; Dalchenko, M; Maghrbi, Y; Huang, T; Sheharyar, A; Durkin, L S; Wang, Z; Tos, K M; Kim, B J; Guo, Y; Ma, P; Rosenzweig, D J; Reeder, D D; Smith, W; Surkov, A; Mohapatra, A K; Maurisset, A; Mans, J M; Kubota, Y; Frahm, E J; Chatterjee, R M; Ruchti, R; Mc cauley, T P; Ivie, P A; Betchart, B A; Hindrichs, O H; Sultana, M; Henderson, C; Sanders, D; Summers, D; Perera, L; Miller, D H; Miyamoto, J; Peng, C; Zahariev, R Z; Peynekov, M M; Ratti, L; Ressegotti, M; Czellar, S; Molnar, J; Khan, A; Morton, A; Vischia, P; Erice cid, C F; Carpinteyro bernardino, S; Chmelev, D; Smetannikov, V; Hektor, A; Kadastik, M; Godinovic, N; Simelevicius, D; Alvi, O I; Hoorani, H U R; Shahzad, H; Shah, M A; Shoaib, M; Rao, M A S; Sidwell, R; Roettger, T J; Corkill, S; Lustermann, W; Roeser, U H; Backhaus, M; Perrin, G L; Naseri, M; Rapuano, F; Redaelli, N; Carbone, L; Spiga, F; Brivio, F; Monti, F; Markowitz, P E; Rodriguez, J L; Morelos pineda, A; Norberg, S R; Ryu, M S; Jeng, Y G; Esteban lallana, M C; Trabelsi, A; Dittmann, J R; Elsayed, E; Khan, Z A; Soomro, K; Janikashvili, M; Kapoor, A; Rastogi, A; Remnev, G; Hrubec, J; Wulz, C; Fichtinger, S K; Abbaneo, D; Janot, P; Racz, A; Roche, J; Ryjov, V; Sphicas, P; Treille, D; Wertelaers, P; Cure, B R; Fulcher, J R; Moortgat, F W; Bocci, A; Giordano, D; Hegeman, J G; Hegner, B; Gallrapp, C; Cepeda hermida, M L; Riahi, H; Chapon, E; Orfanelli, S; Guilbaud, M R J; Seidel, M; Merlin, J A; Heidegger, C; Schneider, M A; Robmann, P W; Salerno, D N; Galloni, C; Neutelings, I W; Shi, J; Li, J; Zhao, J; Pandoulas, D; Rauch, M P; Schael, S; Hoepfner, K; Weber, M K; Teyssier, D F; Thuer, S; Rieger, M; Albert, A; Muller, T; Sert, H; Lohmann, W F; Ntomari, E; Grohsjean, A J; Wen, Y; Ron alvarez, E; Hampe, J; Bin anuar, A A; Blobel, V; Mattig, S; Haller, J; Sonneveld, J M; Malara, A; Rabbertz, K H; Freund, B; Schell, D B; Savoiu, D; Geerebaert, Y; Becheva, E L; Nguyen, M A; Stahl leiton, A G; Magniette, F B; Fay, J; Gascon-shotkin, S M; Ille, B; Viret, S; Finco, L; 
Brown, R; Cockerill, D; Williams, T S; Markou, C; Anagnostou, G; Mohanty, A K; Creanza, D M; De robertis, G; Verwilligen, P O J; Perrotta, A; Fanfani, A; Ciocca, C; Ravera, F; Toniolo, N; Badoer, S; Paolucci, P; Khan, W A; Voevodina, E; De iorio, A; Cavallari, F; Bellini, F; Cossutti, F; La licata, C; Da rold, A; Lee, K; Go, Y; Park, J; Kim, M S; Wan abdullah, W; Toldaiev, O; Golovtcov, V; Oreshkin, V; Sosnov, D; Soroka, D; Gninenko, S; Pivovarov, G; Erofeeva, M; Pozdnyakov, I; Danilov, M; Tarkovskii, E; Chadeeva, M; Philippov, D; Bychkova, O; Kardapoltsev, L; Onengut, G; Cerci, S; Vergili, M; Dolek, F; Sever, R; Gamsizkan, H; Ocalan, K; Dogan, H; Kaya, M; Kuo, C; Chang, Y; Albrow, M G; Banerjee, S; Berryhill, J W; Chevenier, G; Freeman, J E; Green, C H; O'dell, V R; Wenzel, H; Lukhanin, G; Di luca, S; Spiegel, L G; Deptuch, G W; Ratnikova, N; Paterno, M F; Burkett, K A; Jones, C D; Klima, B; Fagan, D; Hasegawa, S; Thompson, R; Gecse, Z; Liu, M; Pedro, K J; Jindariani, S; Zimmerman, T; Skirvin, T M; Hofman, D J; Evdokimov, O; Jung, K E; Trauger, H C; Gouskos, L; Karancsi, J; Kumar, A; Garg, R B; Keshri, S; Nogima, H; Sznajder, A; Vilela pereira, A; Eerola, P A; Pekkanen, J T K; Guldmyr, J H; Gele, D; Charles, L; Bonnin, C; Bourgatte, G; De clercq, J T; Favart, L; Grebenyuk, A; Yang, Y; Allard, Y; Genchev, V I; Galli mercadante, P; Tomei fernandez, T R; Ahuja, S; Ingram, Q; Rohe, T V; Colino, N; Ferrando, A; Garcia-abia, P; Calvo alamillo, E; Goy lopez, S; Delgado peris, A; Alvarez fernandez, A; Couderc, F; Moudden, Y; Potenza, R; D'alessandro, R; Landi, G; Viliani, L; Bisello, D; Gasparini, F; Michelotto, M; Benettoni, M; Bellato, M A; Fanzago, F; De castro manzano, P; Mantovani, G; Menichelli, M; Passeri, D; Placidi, P; Manoni, E; Storchi, L; Cirio, R; Romero, A; Staiano, A; Pastrone, N; Solano, A M; Argiro, S; Bellan, R; Duran osuna, M C; Ershov, Y; Zamyatin, N; Palchik, V; Afanasyev, S; Nikonov, E; Miller, M; Baranov, A; Ivanov, V; Petrushanko, S; Perfilov, M; Eyyubova, G; Baskakov, A; Kachanov, V; Korablev, A; Bordanovskiy, A; Kepuladze, Z; Hsiung, Y B; Wu, S; Rankin, D S; Jacob, C J; Alverson, G; Hortiangtham, A; Roland, G M; Gomez ceballos retuerto, G; Innocenti, G M; Allen, B L; Baty, A A; Narayanan, S M; Hu, M; Bi, R; Sung, K K H; Gunter, T K; Bueghly, J D; Yepes stork, P P; Mestvirishvili, A; Miller, M J; Norbeck, J E; Snyder, C M; Branson, J G; Sfiligoi, I; Rogan, C S; Edwards-bruner, C R; Young, R W; Verweij, M; Goulianos, K; Galvez, P D; Zhu, K; Lapadatescu, V; Dutta, I; Somalwar, S V; Park, M; Kaplan, S M; Feld, D B; Vorobiev, I; Lange, D; Zuranski, A M; Mei, K; Knight iii, R R; Spencer, E; Hogan, J M; Syarif, R; Olmedo negrete, M A; Ghiasi shirazi, S; Erodotou, E; Ban, Y; Xue, Z; Kravchenko, I; Keller, J D; Knowlton, D P; Wigmans, M E J; Volobouev, I; Peltola, T H T; Kovac, M; Bruno, G L; Gregoire, G; Delaere, C; Bodlak, M; Della negra, M J; James, T O; Shtipliyski, A M; Tziaferi, E; Karageorgos, V W; Karasavvas, D; Fountas, K; Mukhopadhyay, S; Basti, A; Raffaelli, F; Spandre, G; Mazzoni, E; Manca, E; Mandorli, G; Yoo, H D; Aerts, A; Eminizer, N C; Amram, O; Stenson, K M; Ford, W T; Green, M L; Kellogg, R; Jeng, G; Kunkle, J M; Baron, O; Feng, Y; Wong, K; Toufique, Y; Sehgal, V; Breedon, R E; Cox, P T; Mulhearn, M J; Gerhard, R M; Taylor, D N; Konigsberg, J; Sperka, D M; Lo, K H; Carnes, A M; Quach, D M; Li, T; Andreev, V; Herve, L A M; Klabbers, P R; Svetek, A; Hussain, U; Evans, A C; Lannon, K P; Fedorov, S; Bodek, A; Demina, R; Khukhunaishvili, A; West, C A; Perez, C U; 
Godang, R; Meier, M; Neumeister, N; Gruchala, M M; Zagurski, K B; Prosolovich, V; Kuhn, J; Ratti, S P; Riccardi, C M; Vacchi, C; Szekely, G; Hobson, P R; Fernandez menendez, J; Rodriguez bouza, V; Butler, P; Pedraza morales, M I; Barakat, N; Sakharov, V; Lavrenov, P; Ahmed, I; Kim, T Y; Pac, M Y; Sculac, T; Gajdosik, T; Tamosiunas, K; Juodagalvis, A; Dudenas, V; Barannik, S; Bashir, A; Khan, F; Saeed, F; Khan, M T; Maravin, Y; Mohammadi, A; Noonan, D C; Saunders, M D; Dittmar, M; Donega, M; Perrozzi, L; Nageli, C; Dorfer, C; Zhu, D H; Spirig, Y A; Ruini, D; Alishahiha, M; Ardalan, F; Saramad, S; Mansouri, R; Eskandari tadavani, E; Ragazzi, S; Tabarelli de fatis, T; Govoni, P; Ghezzi, A; Stringhini, G; Sevilla moreno, A C; Smith, C J; Abdelalim, A A; Hassan, A F A; Swain, S K; Sahoo, D K; Carrera jarrin, E F; Chauhan, S; Munoz chavero, F; Ambrogi, F; Hensel, C; Alves, G A; Baechler, J; Christiansen, J; De roeck, A; Gayde, J; Hansen, M; Kienzle, W; Reynaud, S; Schwick, C; Troska, J; Zeuner, W D; Osborne, J A; Moll, M; Franzoni, G; Tinoco mendes, A D; Milenovic, P; Garai, Z; Bendavid, J L; Dupont, N A; Gulhan, D C; Daponte, V; Martinez turtos, R; Giuffredi, R; Rapacz, K J; Otiougova, P; Zhu, G; Leggat, D A; Kiesel, M K; Lipinski, M; Wallraff, W; Meyer, A; Pook, T; Pooth, O; Behnke, O; Eckstein, D; Fischer, D J; Garay garcia, J; Vagnerini, A; Klanner, R; Stadie, H; Perieanu, A; Benecke, A; Abbas, S M; Schroeder, M; Lobelle pardo, P; Chwalek, T; Heidecker, C; Floh, K M; Gomez, G; Cabrillo bartolome, I J; Orviz fernandez, P; Duarte campderros, J; Busson, P; Dobrzynski, L; Fontaine, G R R; Granier de cassagnac, R; Paganini, P R J; Arleo, F P; Balagura, V; Martin blanco, J; Ortona, G; Kucher, I; Contardo, D C; Lumb, N; Baulieu, G; Lagarde, F; Shchablo, K; Heath, H F; Kreczko, L; Clement, E J; Paramesvaran, S; Bologna, S; Bell, K W; Petyt, D A; Moretti, S; Durkin, T J; Daskalakis, G; Kataria, S K; Iaselli, G; Pugliese, G; My, S; Sharma, A; Abbiendi, G; Taneja, S; Benussi, L; Fabbri, F; Calvelli, V; Frizziero, E; Barone, L M; De notaristefani, F; D'imperio, G; Gobbo, B; Yusupov, H; Liew, C S; Zabolotny, W M; Sobolev, S; Gavrikov, Y; Kozlov, I; Golubev, N; Andreev, Y; Tlisov, D; Zaytsev, V; Stepennov, A; Popova, E; Kolchanova, A; Shtol, D; Sirunyan, A; Gokbulut, G; Kara, O; Damarseckin, S; Guler, A M; Ozpineci, A; Hayreter, A; Li, S; Gruenendahl, S; Yarba, J; Para, A; Ristori, L F; Rubinov, P M; Reichanadter, M A; Churin, I; Beretvas, A; Muzaffar, S M; Lykken, J D; Gutsche, O; Baldin, B; Uplegger, L A; Lei, C M; Wu, W; Derylo, G E; Ruschman, M K; Lipton, R J; Whitbeck, A J; Schmitt, R; Contreras pasuy, L C; Olsen, J T; Cavanaugh, R J; Betts, R R; Wang, H; Sturdy, J T; Gutierrez jr, A; Campagnari, C F; White, D T; Brewer, F D; Qu, H; Ranjan, K; Lalwani, K; Md, H; Shah, A H; Fonseca de souza, S; De jesus damiao, D; Revoredo, E A; Chinellato, J A; Amadei marques da costa, C; Lampen, P T; Wendland, L A; Brom, J; Andrea, J; Tavernier, S; Van doninck, W K; Van mulders, P K A; Clerbaux, B; Rougny, R; Rashevski, G D; Rodozov, M N; Padula, S; Bernardes, C A; Dias maciel, C; Deiters, K; Feichtinger, D; Wiederkehr, S A; Cerrada, M; Fouz iglesias, M; Senghi soares, M; Pasquetto, E; Ferry, S C; Georgette, Z; Malcles, J; Csanad, M; Lal, M K; Walia, G; Kaur, A; Ciulli, V; Lenzi, P; Zanetti, M; Costa, M; Dughera, G; Bartosik, N; Ramirez sanchez, G; Frueboes, T M; Karjavine, V; Skachkov, N; Litvinenko, A; Petrosyan, A; Teryaev, O; Trofimov, V; Makankin, A; Golunov, A; Savrin, V; Korotkikh, V; Vardanyan, I; Lukina, O; 
Belyaev, A; Korneeva, N; Petukhov, V; Skvortsov, V; Konstantinov, D; Efremov, V; Smirnov, N; Shiu, J; Chen, P; Rohlf, J; Sulak, L R; St john, J M; Morse, D M; Krajczar, K F; Mironov, C M; Niu, X; Wang, J; Charaf, O; Matveev, M; Eppley, G W; Mccliment, E R; Ozok, F; Bilki, B; Zieser, A J; Olivito, D J; Wood, J G; Hashemi, B T; Bean, A L; Wang, Q; Tuo, S; Xu, Q; Roberts, J W; Anderson, D J; Lath, A; Jacques, P; Sun, M; Andrews, M B; Svyatkovskiy, A; Hardenbrook, J R; Heintz, U; Lee, J; Wang, L; Prosper, H B; Adams, J R; Liu, S; Wang, D; Swanson, D; Thiltges, J F; Undleeb, S; Finger, M; Beuselinck, R; Rand, D T; Tapper, A D; Malik, S A; Lane, R C; Panagiotou, A; Diamantopoulou, M; Vourliotis, E; Mallios, S; Mondal, K; Bhattacharya, R; Bhowmik, D; Libby, J F; Azzurri, P; Foa, L; Tenchini, R; Verdini, P G; Ciampa, A; Radburn-smith, B C; Park, J; Swartz, M L; Sarica, U; Borcherding, F O; Barria, P; Goadhouse, S D; Xia, F; Joyce, M L; Belloni, A; Bouhali, O; Toback, D; Osipenkov, I L; Almes, G T; Walker, J W; Bylsma, B G; Lefeld, A J; Conway, J S; Flores, C S; Avery, P R; Terentyev, N; Barashko, V; Ryd, A P E; Tucker, J M; Heltsley, B K; Wittich, P; Riley, D S; Skinnari, L A; Chu, J Y; Ignatenko, M; Lindgren, M A; Saltzberg, D P; Peck, A N; Herve, A A M; Savin, A; Herndon, M F; Mason, W P; Martirosyan, S; Grahl, J; Hansen, P D; Saradhy, R; Mueller, C N; Planer, M D; Suh, I S; Hurtado anampa, K P; De barbaro, P J; Garcia-bellido alvarez de miranda, A A; Korjenevski, S K; Moolekamp, F E; Fallon, C T; Acosta castillo, J G; Gutay, L; Barker, A W; Gough, E; Poyraz, D; Verbeke, W L M; Beniozef, I S; Krasteva, R L; Winn, D R; Fenyvesi, A C; Makovec, A; Munro, C G; Sanchez cruz, S; Bernardino rodrigues, N A; Lokhovitskiy, A; Uribe estrada, C; Rebane, L; Racioppi, A; Kim, H; Kim, T; Puljak, I; Boyaryntsev, A; Saeed, M; Tanwir, S; Butt, U; Hussain, A; Nawaz, A; Khurshid, T; Imran, M; Sultan, A; Naeem, M; Kaadze, K; Modak, A; Taylor, R D; Kim, D; Grab, C; Nessi-tedaldi, F; Fischer, J; Manzoni, R A; Zagozdzinska-bochenek, A A; Berger, P; Reichmann, M P; Hashemi, M; Rezaei hosseinabadi, F; Paganoni, M; Farina, F M; Joshi, Y R; Avila bernal, C A; Cabrera mora, A L; Segura delgado, M A; Gonzalez hernandez, C F; Asavapibhop, B; U-ruekolan, S; Kim, G; Choi, M; Aly, S; El sawy, M; Castaneda hernandez, A M; Pinna, D; Shamdasani, J; Tavkhelidze, D; Hegde, V; Aziz, T; Sur, N; Sutar, B J; Karmakar, S; Ghete, V M; Dragicevic, M G; Brandstetter, J; Marques moraes, A; Molina insfran, J A; Aspell, P; Baillon, P; Barney, D; Honma, A; Pape, L; Sakulin, H; Macpherson, A L; Bangert, N; Guida, R; Steggemann, J; Voutsinas, G G; Da silva gomes, D; Ben mimoun bel hadj, F; Bonnaud, J Y R; Canelli, F M; Bai, J; Qiu, J; Bian, J; Cheng, Y; Kukulies, C; Teroerde, M; Erdmann, M; Hebbeker, T; Zantis, F; Scheuch, F; Erdogan, Y; Campbell, A J; Kasemann, M; Lange, W; Raspiareza, A; Melzer-pellmann, I; Aldaya martin, M; Lewendel, B; Schmidt, R S; Lipka, E; Missiroli, M; Grados luyando, J M; Shevchenko, R; Babounikau, I; Steinbrueck, G; Vanhoefer, A; Ebrahimi, A; Pena rodriguez, K J; Niedziela, M A; Eich, M M; Froehlich, A; Simonis, H J; Katkov, I; Wozniewski, S; Marco de lucas, R J; Lopez virto, A M; Jaramillo echeverria, R W; Hennion, P; Zghiche, A; Chiron, A; Romanteau, T; Beaudette, F; Lobanov, A; Grasseau, G J; Pierre-emile, T B; El mamouni, H; Gouzevitch, M; Goldstein, J; Cussans, D G; Seif el nasr, S A; Titterton, A S; Ford, P J W; Olaiya, E O; Salisbury, J G; Paspalaki, G; Asenov, P; Hidas, P; Kiss, T N; Zalan, P; Shukla, P; 
Abbrescia, M; De filippis, N; Donvito, G; Radogna, R; Miniello, G; Gelmi, A; Capiluppi, P; Marcellini, S; Odorici, F; Bonacorsi, D; Genta, C; Ferri, G; Saviano, G; Ferrini, M; Minutoli, S; Tosi, S; Lista, L; Passeggio, G; Breglio, G; Merola, M; Diemoz, M; Rahatlou, S; Baccaro, S; Bartoloni, A; Talamo, I G; Cipriani, M; Kim, J Y; Oh, G; Lim, J H; Lee, J; Mohamad idris, F B; Gani, A B; Cwiok, M; Doroba, K; Martins galinhas, B E; Kim, V; Krivshich, A; Vorobyev, A; Ivanov, Y; Tarakanov, V; Lobodenko, A; Obikhod, T; Isayev, O; Kurov, O; Leonidov, A; Lvova, N; Kirsanov, M; Suvorova, O; Karneyeu, A; Demidov, S; Konoplyannikov, A; Popov, V; Pakhlov, P; Vinogradov, S; Klemin, S; Blinov, V; Skovpen, I; Chatrchyan, S; Grigorian, N; Kayis topaksu, A; Sunar cerci, D; Hos, I; Guler, Y; Kiminsu, U; Serin, M; Deniz, M; Turan, I; Eryol, F; Pozdnyakov, A; Liu, Z; Doan, T H; Hanlon, J E; Mcbride, P L; Pal, I; Garren, L; Oleynik, G; Harris, R M; Bolla, G; Kowalkowski, J B; Evans, D E; Vaandering, E W; Patrick, J F; Rechenmacher, R; Prosser, A G; Messer, T A; Tiradani, A R; Rivera, R A; Jayatilaka, B A; Duarte, J M; Todri, A; Harr, R F; Richman, J D; Bhandari, R; Dordevic, M; Cirkovic, P; Mora herrera, C; Rosa lopes zachi, A; De paula carvalho, W; Kinnunen, R L A; Lehti, S T; Maeenpaeae, T H; Bloch, D; Chabert, E C; Rudolf, N G; Devroede, O; Skovpen, K; Lontkovskyi, D; De wolf, E A; Van mechelen, P; Van spilbeeck, A B E; Georgiev, L S; Novaes, S F; Costa, M A; Costa leal, B; Horisberger, R P; De la cruz, B; Willmott, C; Perez-calero yzquierdo, A M; Dejardin, M M; Mehta, A; Barbagli, G; Focardi, E; Bacchetta, N; Gasparini, U; Pantano, D; Sgaravatto, M; Ventura, S; Zotto, P; Candelori, A; Pozzobon, N; Boletti, A; Servoli, L; Postolache, V; Rossi, A; Ciangottini, D; Alunni solestizi, L; Maselli, S; Migliore, E; Amapane, N C; Lopez fernandez, R; Sanchez hernandez, A; Heredia de la cruz, I; Matveev, V; Kracikova, T; Shmatov, S; Vasilev, S; Kurenkov, A; Oleynik, D; Verkheev, A; Voytishin, N; Proskuryakov, A; Bogdanova, G; Petrova, E; Bagaturia, I; Tsamalaidze, Z; Zhao, Z; Arcaro, D J; Barberis, E; Wamorkar, T; Wang, B; Ralph, D K; Velasco, M M; Odell, N J; Sevova, S; Li, W; Merlo, J; Onel, Y; Mermerkaya, H; Moeller, A R; Haytmyradov, M; Dong, R; Bugg, W M; Ragghianti, G C; Delannoy sotomayor, A G; Thapa, K; Yagil, A; Gerosa, R A; Masciovecchio, M; Schmitz, E J; Kapustinsky, J S; Greene, S V; Zhang, L; Vlimant, J V; Mughal, A; Cury siqueira, S; Gershtein, Y; Arora, S R R; Lin, W X; Stickland, D P; Mc donald, K T; Pivarski, J M C; Lucchini, M T; Higginbotham, S L; Rosenfield, M; Long, O R; Johnson, K F; Adams, T; Susa, T; Rykaczewski, H; Ioannou, A; Ge, Y; Levin, A M; Li, J; Li, L; Bloom, K A; Monroy montanez, J A; Kunori, S; Wang, Z; Favart, D; Maltoni, F; Vidal marono, M; Delcourt, M; Markov, S I; Seez, C; Richards, A J; Ferguson, W; Chatziangelou, M; Karathanasis, G; Kontaxakis, P; Jones, J A; Strologas, J; Katsoulis, P; Dutt, S; Roy chowdhury, S; Bhardwaj, R; Purohit, A; Singh, B; Behera, P K; Sharma, A; Spagnolo, P; Tonelli, G E; Giannini, L; Poulios, S; Groote, J F; Untuc, B; Oztirpan, F O; Koseoglu, I; Luiggi lopez, E E; Hadley, N J; Shin, Y H; Safonov, A; Eusebi, R; Rose, A K; Overton, D A; Erbacher, R D; Funk, G N; Pilot, J R; Regnery, B J; Klimenko, S; Matchev, K; Gleyzer, S; Wang, J; Cadamuro, L; Sun, W M; Soffi, L; Lantz, S R; Wright, D; Cline, D; Cousins jr, R D; Erhan, S; Yang, X; Schnaible, C J; Dasgupta, A; Loveless, R; Bradley, D C; Monzat, D; Dodd, L M; Tikalsky, J L; Kapusta, J; Gilbert, W J; Lesko, 
Z J; Marinelli, N; Wayne, M R; Heering, A H; Galanti, M; Duh, Y; Roy, A; Arabgol, M; Hacker, T J; Salva, S; Petrov, V; Barychevski, V; Drobychev, G; Lobko, A; Gabusi, M; Fabris, L; Conte, E R E; Kasprowicz, G H; Kyberd, P; Cole, J E; Lopez, J M; Salazar gonzalez, C A; Benzon, A M; Pelagio, L; Walsh, M F; Postnov, A; Lelas, D; Vaitkus, J V; Jurciukonis, D; Sulmanas, B; Ahmad, A; Ahmed, W; Jalil, S H; Kahl, W E; Taylor, D R; Choi, Y I; Jeong, Y; Roy, T; Schoenenberger, M A; Khateri, P; Etesami, S M; Fiorini, E; Pullia, A; Magni, S; Gennai, S; Fiorendi, S; Zuolo, D; Sanabria arenas, J C; Florez bustos, C A; Holguin coral, A; Mendez, H; Srimanobhas, N; Jaikar, A H; Arteche gonzalez, F J; Call, K R; Vazquez valencia, E F; Calderon monroy, M A; Abdelmaguid, A; Mal, P K; Yuan, L; Lomidze, I; Prangishvili, I; Adamov, G; Dube, S S; Dugad, S; Mohanty, G B; Bhat, M A; Bheesette, S; Malawski, M L; Abou kors, D J

    CMS is a general purpose proton-proton detector designed to run at the highest luminosity at the LHC. It is also well adapted for studies at the initially lower luminosities. The CMS Collaboration consists of over 1800 scientists and engineers from 151 institutes in 31 countries. The main design goals of CMS are: (1) a highly performant muon system, (2) the best possible electromagnetic calorimeter, (3) high quality central tracking, (4) hermetic calorimetry, and (5) a detector costing less than 475 MCHF. All detector sub-systems have started construction. Engineering Design Reviews of parts of these sub-systems have been successfully carried out; these reviews are held prior to granting authorization for purchase. The schedule for the LHC machine and the experiments has been revised, and CMS will be ready for first collisions, now expected in April 2006. Magnet: The detector (see Figure) will be built around a long (13 m) and large-bore ($\phi$ = 5.9 m) high...

  13. Searches for supersymmetry at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Collaboration: F. Giordano on behalf of the CMS Collaboration

    2017-11-15

    Among the most promising prospects for a theory of physics beyond the standard model is supersymmetry. In this talk, the latest results from the CMS experiment at the LHC on searches for supersymmetry produced through strong production and electroweak production channels are presented using 20/fb of data from the 8 TeV LHC run, with particular focus on gluino and stop searches.

  14. The CMS DBS query language

    International Nuclear Information System (INIS)

    Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo Yuyi; Lueking, Lee

    2010-01-01

    The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We will describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
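
    As a rough illustration of the join-discovery idea described above, the following Python sketch walks a small, invented graph of schema tables to find the chain of join conditions linking the selected and constrained tables, and then assembles the SQL string. The table names, keys and query shape are hypothetical and far simpler than the real DBS schema.

      # Minimal sketch of schema-graph join discovery; table and column names are invented.
      from collections import deque

      SCHEMA_EDGES = {                       # hypothetical foreign-key relations
          ("dataset", "block"): "dataset.id = block.dataset_id",
          ("block", "file"): "block.id = file.block_id",
          ("dataset", "tier"): "dataset.tier_id = tier.id",
      }

      def neighbours(table):
          for (a, b), cond in SCHEMA_EDGES.items():
              if a == table:
                  yield b, cond
              elif b == table:
                  yield a, cond

      def join_path(start, goal):
          """Breadth-first search for the chain of joins linking two tables."""
          queue, seen = deque([(start, [])]), {start}
          while queue:
              table, conds = queue.popleft()
              if table == goal:
                  return conds
              for nxt, cond in neighbours(table):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append((nxt, conds + [(nxt, cond)]))
          raise ValueError(f"no join path from {start} to {goal}")

      def build_sql(select_table, select_col, where_table, where_expr):
          """Translate a 'find file.name where dataset.name = X' style query into SQL."""
          sql = f"SELECT {select_table}.{select_col} FROM {select_table}"
          for table, cond in join_path(select_table, where_table):
              sql += f" JOIN {table} ON {cond}"
          return sql + f" WHERE {where_expr}"

      if __name__ == "__main__":
          print(build_sql("file", "name", "dataset", "dataset.name = '/A/B/RAW'"))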

  15. 45 CFR 150.203 - Circumstances requiring CMS enforcement.

    Science.gov (United States)

    2010-10-01

    45 Public Welfare 1 (2010-10-01): Circumstances requiring CMS enforcement. 150.203... CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS Enforcement Processes for... requiring CMS enforcement. CMS enforces HIPAA requirements to the extent warranted (as determined by CMS) in...

  16. Private visit to the CMS assembly site of Dr. Fidel Castro Diaz-Balart from the Superior Institute of Sciences and Nuclear Technologies, Havana, accompanied by His Excellency Mr. Emilio Caballero, Ambassador, Permanent Mission of Cuba in Paris.

    CERN Multimedia

    Maximilien Brice

    2002-01-01

    Photo 01: Left to right: His Excellency Mr Emilio Caballero; Prof. Tejinder Virdee, Deputy Spokesman of the CMS experiment; Dr Fidel Castro Diaz-Balart; Dr Matthias Schroeder, physicist, Experimental Physics division; Mrs Noëlle Levy, Casa del Habano, Geneva; Dr John Ellis, Adviser for Non-Member State relations; Dr Christian Roche, Senior Advisor to the Director-General. Photo 02: Left to right: His Excellency Mr Emilio Caballero, Prof. Tejinder Virdee, Dr Fidel Castro Diaz-Balart; Dr Matthias Schroeder; Mrs Noëlle Levy, Prof. Juan Antonio Rubio, Head of the Education and Technology Transfer division; Dr John Ellis.

  17. CMS distributed data analysis with CRAB3

    Science.gov (United States)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
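
    The following Python sketch illustrates, very loosely, the client/server split described above: a thin client POSTs a task description to a central server over REST and later polls its status. The host name, endpoint paths and payload fields are invented for illustration and are not the actual CRAB3 API.

      # Illustrative thin REST client; endpoints, fields and host are hypothetical.
      import requests

      SERVER = "https://cmsweb.example.org/crabserver/api"   # invented host

      def submit_task(dataset, pset, site_whitelist=None, cert=None):
          """POST a new analysis task description to the central server."""
          payload = {
              "inputDataset": dataset,
              "psetName": pset,
              "siteWhitelist": site_whitelist or [],
          }
          resp = requests.post(f"{SERVER}/task", json=payload, cert=cert, timeout=30)
          resp.raise_for_status()
          return resp.json()["taskName"]

      def task_status(task_name, cert=None):
          """GET the aggregated status of a previously submitted task."""
          resp = requests.get(f"{SERVER}/task", params={"name": task_name},
                              cert=cert, timeout=30)
          resp.raise_for_status()
          return resp.json()

      # Example usage (requires a reachable server and valid credentials):
      #   task = submit_task("/SingleMuon/Run2015-v1/MINIAOD", "analysis_cfg.py")
      #   print(task_status(task))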

  18. CMS Security Handbook The Comprehensive Guide for WordPress, Joomla, Drupal, and Plone

    CERN Document Server

    Canavan, Tom

    2011-01-01

    Learn to secure Web sites built on open source CMSs. Web sites built on Joomla!, WordPress, Drupal, or Plone face some unique security threats. If you're responsible for one of them, this comprehensive security guide, the first of its kind, offers detailed guidance to help you prevent attacks, develop secure CMS-site operations, and restore your site if an attack does occur. You'll learn a strong, foundational approach to CMS operations and security from an expert in the field. More and more Web sites are being built on open source CMSs, making them a popular target, thus making you vulnerable t

  19. CMS Computing Software and Analysis Challenge 2006

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [Dipartimento interateneo di Fisica M. Merlin and INFN Bari, Via Amendola 173, 70126 Bari (Italy)]

    2007-10-15

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with its data handling model. For this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, such as prompt reconstruction, data streaming, iterative calibration and alignment executions, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and analysis jobs. An overview of the status and results of CSA06 is presented in this work.

  20. CMS data quality monitoring web service

    International Nuclear Information System (INIS)

    Tuura, L; Eulisse, G; Meyer, A

    2010-01-01

    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live online and as up to several terabytes of archived histograms for the online data taking, Tier-0 prompt reconstruction, prompt calibration and analysis activities, for re-reconstruction at Tier-1s and for release validation. At the present usage level the servers handle in total around a million authenticated HTTP requests per day. We describe the main features and components of the system, our implementation for web-based interactive rendering, and the server design. We give an overview of the deployment and maintenance procedures. We discuss the main technical challenges and our solutions to them, with emphasis on functionality, long-term robustness and performance.
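
    As a toy illustration of the kind of service described above, the Python sketch below serves per-run histogram summaries over HTTP from a small in-memory archive. The URL layout, run numbers and histogram names are invented; the real system renders hundreds of thousands of histograms from dedicated backends.

      # Tiny histogram-summary web service; archive contents and URL scheme are invented.
      import json
      from http.server import BaseHTTPRequestHandler, HTTPServer
      from urllib.parse import urlparse, parse_qs

      ARCHIVE = {   # run number -> histogram name -> summary numbers (toy data)
          "316766": {"Tracking/Chi2": {"entries": 120000, "mean": 1.3},
                     "Muons/Pt": {"entries": 98000, "mean": 24.7}},
      }

      class DQMHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              url = urlparse(self.path)
              if url.path != "/histogram":
                  self.send_error(404)
                  return
              qs = parse_qs(url.query)
              run = qs.get("run", [""])[0]
              name = qs.get("name", [""])[0]
              summary = ARCHIVE.get(run, {}).get(name)
              body = json.dumps(summary or {"error": "not found"}).encode()
              self.send_response(200 if summary else 404)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # e.g. GET http://localhost:8030/histogram?run=316766&name=Muons/Pt
          HTTPServer(("localhost", 8030), DQMHandler).serve_forever()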

  1. CMS data quality monitoring web service

    Energy Technology Data Exchange (ETDEWEB)

    Tuura, L; Eulisse, G [Northeastern University, Boston, MA (United States)]; Meyer, A [DESY, Hamburg (Germany)], E-mail: lat@cern.ch, E-mail: giulio.eulisse@cern.ch, E-mail: andreas.meyer@cern.ch

    2010-04-01

    A central component of the data quality monitoring system of the CMS experiment at the Large Hadron Collider is a web site for browsing data quality histograms. The production servers in data taking provide access to several hundred thousand histograms per run, both live online and as up to several terabytes of archived histograms for the online data taking, Tier-0 prompt reconstruction, prompt calibration and analysis activities, for re-reconstruction at Tier-1s and for release validation. At the present usage level the servers handle in total around a million authenticated HTTP requests per day. We describe the main features and components of the system, our implementation for web-based interactive rendering, and the server design. We give an overview of the deployment and maintenance procedures. We discuss the main technical challenges and our solutions to them, with emphasis on functionality, long-term robustness and performance.

  2. Commissioning the CMS alignment and calibration framework

    International Nuclear Information System (INIS)

    Futyan, David

    2010-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience with this framework, and results of this validation are reported.
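
    The Python sketch below gives a schematic picture of the prompt-calibration loop described above: constants are derived from a dedicated calibration stream and registered as a database payload with an interval of validity before prompt reconstruction picks them up. The stream contents, payload format and "upload" step are invented placeholders, not the actual CMS conditions machinery.

      # Schematic prompt-calibration loop; data and payload handling are invented.
      import statistics

      def derive_pedestals(calib_stream):
          """Toy condition: per-channel pedestal = mean of the sampled ADC values."""
          return {channel: statistics.mean(samples)
                  for channel, samples in calib_stream.items()}

      def upload_payload(conditions_db, tag, run, payload):
          """Register the payload under a tag, valid from this run onwards (IOV)."""
          conditions_db.setdefault(tag, []).append({"since": run, "payload": payload})

      if __name__ == "__main__":
          stream = {"ch_001": [201, 203, 199], "ch_002": [187, 190, 185]}   # toy stream
          db = {}
          upload_payload(db, "Pedestals_prompt_v1", run=316766,
                         payload=derive_pedestals(stream))
          print(db["Pedestals_prompt_v1"])   # available before prompt reconstruction starts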

  3. Commissioning the CMS Alignment and Calibration Framework

    CERN Document Server

    Futyan, David

    2009-01-01

    The CMS experiment has developed a powerful framework to ensure the precise and prompt alignment and calibration of its components, which is a major prerequisite to achieve the optimal performance for physics analysis. The prompt alignment and calibration strategy harnesses computing resources both at the Tier-0 site and the CERN Analysis Facility (CAF) to ensure fast turnaround for updating the corresponding database payloads. An essential element is the creation of dedicated data streams concentrating the specific event information required by the various alignment and calibration workflows. The resulting low latency is required for feeding the resulting constants into the prompt reconstruction process, which is essential for achieving swift physics analysis of the LHC data. This report discusses the implementation and the computational aspects of the alignment and calibration framework. Recent commissioning campaigns with cosmic muons, beam halo and simulated data have been used to gain detailed experience...

  4. CMS Computing Software and Analysis Challenge 2006

    International Nuclear Information System (INIS)

    De Filippis, N.

    2007-01-01

    The CMS (Compact Muon Solenoid) collaboration is making a big effort to test the workflow and the dataflow associated with its data handling model. For this purpose the Computing, Software and Analysis Challenge 2006, namely CSA06, started on the 15th of September. It was a 50 million event exercise that included all the steps of the analysis chain, such as prompt reconstruction, data streaming, iterative calibration and alignment executions, data distribution to regional sites, and end-user analysis. Grid tools provided by the LCG project are also exercised to gain access to the data and the resources, providing a user-friendly interface to the physicists submitting the production and analysis jobs. An overview of the status and results of CSA06 is presented in this work.

  5. DeepFlavour in CMS

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Flavour-tagging of jets is an important task in collider-based high energy physics and a field where machine learning tools are applied by all major experiments. A new tagger (DeepFlavour), based on an advanced machine learning procedure, was developed and commissioned in CMS. A deep neural network is used to perform multi-class classification of jets that originate from a b-quark, two b-quarks, a c-quark, two c-quarks or light coloured partons (u, d, s quarks or gluons). The performance was measured in both data and simulation. The talk will also include the measured performance of all taggers in CMS. The different taggers and results will be discussed and compared, with some focus on details of the newest tagger.
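
    For illustration only, the PyTorch sketch below sets up a small multi-class jet classifier with the five flavour categories mentioned above and runs a few training steps on random stand-in data. The input features and architecture are placeholders and far simpler than the actual DeepFlavour model, which works on per-particle inputs.

      # Toy multi-class flavour tagger; features, labels and architecture are invented.
      import torch
      import torch.nn as nn

      CLASSES = ["b", "bb", "c", "cc", "light"]

      class ToyFlavourNet(nn.Module):
          def __init__(self, n_features=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(n_features, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, len(CLASSES)),   # one logit per flavour class
              )

          def forward(self, x):
              return self.net(x)

      if __name__ == "__main__":
          model = ToyFlavourNet()
          loss_fn = nn.CrossEntropyLoss()
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)

          features = torch.randn(256, 16)                      # random stand-in jets
          labels = torch.randint(0, len(CLASSES), (256,))      # random stand-in truth

          for _ in range(5):                                   # a few illustrative steps
              opt.zero_grad()
              loss = loss_fn(model(features), labels)
              loss.backward()
              opt.step()

          probs = torch.softmax(model(features[:1]), dim=1)
          print(dict(zip(CLASSES, probs[0].tolist())))         # per-class probabilities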

  6. The CMS Muon System Alignment

    CERN Document Server

    Martinez Ruiz-Del-Arbol, P

    2009-01-01

    The alignment of the muon system of CMS is performed using different techniques: photogrammetry measurements, optical alignment and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit and impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, as long as the available integrated luminosity still significantly limits the size of the muon sample from collisions, cosmic muon and beam halo signatures play a very strong role. During the last commissioning runs in 2008 the first aligned geometries were produced and validated with data. The CMS offline computing infrastructure has been used in order to perform improved reconstructions. We present the computational aspects related to the calculation of alignment constants at the CERN Analysis Facility (CAF), the production and population of databases and the validation and performance in the official reconstruction. Also...
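
    As a much-reduced illustration of iterative track-based alignment in the spirit of a hit-and-impact-point procedure, the Python sketch below shifts each chamber by the mean residual between measured hits and track predictions and repeats until the corrections are small. The geometry is one-dimensional, the numbers are invented, and the track refit that a real iteration would perform is omitted.

      # Toy iterative alignment: one coordinate per chamber, invented residuals.
      def iterate_alignment(hits, predictions, corrections, n_iter=10, tol=1e-4):
          """hits/predictions: chamber -> list of positions; corrections updated in place."""
          for _ in range(n_iter):
              max_shift = 0.0
              for chamber, measured in hits.items():
                  residuals = [m - corrections[chamber] - p
                               for m, p in zip(measured, predictions[chamber])]
                  shift = sum(residuals) / len(residuals)   # mean residual of this chamber
                  corrections[chamber] += shift
                  max_shift = max(max_shift, abs(shift))
              if max_shift < tol:                           # corrections have converged
                  break
          return corrections

      if __name__ == "__main__":
          hits = {"MB1": [10.12, 10.09, 10.11], "MB2": [19.95, 19.97, 19.96]}
          predictions = {"MB1": [10.0, 10.0, 10.0], "MB2": [20.0, 20.0, 20.0]}
          print(iterate_alignment(hits, predictions, {"MB1": 0.0, "MB2": 0.0}))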

  7. Virtual data in CMS production

    International Nuclear Information System (INIS)

    Arbree, A. et al.

    2004-01-01

    Initial applications of the GriPhyN Chimera Virtual Data System have been performed within the context of CMS Production of Monte Carlo Simulated Data. The GriPhyN Chimera system consists of four primary components: (1) a Virtual Data Language, which is used to describe virtual data products, (2) a Virtual Data Catalog, which is used to store virtual data entries, (3) an Abstract Planner, which resolves all dependencies of a particular virtual data product and forms a location and existence independent plan, (4) a Concrete Planner, which maps an abstract, logical plan onto concrete, physical grid resources accounting for staging in/out files and publishing results to a replica location service. A CMS Workflow Planner, MCRunJob, is used to generate virtual data products using the Virtual Data Language. Subsequently, a prototype workflow manager, known as WorkRunner, is used to schedule the instantiation of virtual data products across a grid
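
    The Python sketch below illustrates the abstract-planning step described above: given declarations of how virtual data products are derived from their inputs, it resolves the full, location-independent chain of transformations needed to materialise a requested product. The product names and derivations are invented, and the catalog and concrete-planning stages are ignored.

      # Toy "abstract planner": resolve the derivation chain of a virtual data product.
      DERIVATIONS = {   # product: (transformation, inputs) -- invented examples
          "ntuple.root": ("analyze", ["reco.root"]),
          "reco.root": ("reconstruct", ["simhits.root"]),
          "simhits.root": ("simulate", ["generator.cfg"]),
      }

      def abstract_plan(product, plan=None, seen=None):
          """Return the ordered list of (transformation, inputs, output) steps."""
          if plan is None:
              plan, seen = [], set()
          if product in seen or product not in DERIVATIONS:
              return plan                       # raw input, or already planned
          seen.add(product)
          transformation, inputs = DERIVATIONS[product]
          for dependency in inputs:             # plan the prerequisites first
              abstract_plan(dependency, plan, seen)
          plan.append((transformation, inputs, product))
          return plan

      if __name__ == "__main__":
          for step in abstract_plan("ntuple.root"):
              print(step)     # simulate, then reconstruct, then analyze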

  8. CMS Silicon Strip Tracker Performance

    CERN Document Server

    Agram, Jean-Laurent

    2012-01-01

    The CMS Silicon Strip Tracker (SST), consisting of 9.6 million readout channels from 15148 modules and covering an area of 198 square meters, needs to be precisely calibrated in order to correctly reconstruct the events recorded. Calibration constants are derived from different workflows, from promptly reconstructed events with particles as well as from commissioning events gathered just before the acquisition of physics runs. The performance of the SST has been carefully studied since the beginning of data taking: the noise of the detector, data integrity, signal-over-noise ratio, hit reconstruction efficiency and resolution have all been investigated over time and under different conditions. In this paper we describe the reconstruction strategies, the calibration procedures and the detector performance results from the latest CMS operation.
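
    As a toy illustration of two of the quantities listed above, the Python sketch below computes a per-cluster signal-over-noise ratio and a simple hit-finding efficiency. The cluster model (strip charges in ADC counts with one noise figure per strip) and all numbers are invented simplifications.

      # Toy strip-tracker figures of merit with invented numbers.
      import math

      def signal_over_noise(strip_charges, strip_noises):
          """Cluster S/N: summed charge over the quadratic sum of the strip noises."""
          signal = sum(strip_charges)
          noise = math.sqrt(sum(n * n for n in strip_noises))
          return signal / noise

      def hit_efficiency(expected_crossings, matched_hits):
          """Fraction of expected track crossings with a matched reconstructed hit."""
          return len(matched_hits & expected_crossings) / len(expected_crossings)

      if __name__ == "__main__":
          print(round(signal_over_noise([38, 65, 22], [4.8, 5.1, 4.9]), 1))   # ~14.6
          print(hit_efficiency({1, 2, 3, 4, 5}, {1, 2, 3, 5}))                # 0.8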

  9. Virtual Data in CMS Production

    CERN Document Server

    Arbree, A; Bourilkov, D; Cavanaugh, R J; Graham, G; Katageri, S; Rodríguez, J; Voeckler, J; Wilde, M

    2003-01-01

    Initial applications of the GriPhyN Chimera Virtual Data System have been performed within the context of CMS Production of Monte Carlo Simulated Data. The GriPhyN Chimera system consists of four primary components: 1) a Virtual Data Language, which is used to describe virtual data products, 2) a Virtual Data Catalog, which is used to store virtual data entries, 3) an Abstract Planner, which resolves all dependencies of a particular virtual data product and forms a location and existence independent plan, 4) a Concrete Planner, which maps an abstract, logical plan onto concrete, physical grid resources accounting for staging in/out files and publishing results to a replica location service. A CMS Workflow Planner, MCRunJob, is used to generate virtual data products using the Virtual Data Language. Subsequently, a prototype workflow manager, known as WorkRunner, is used to schedule the instantiation of virtual data products across a grid.

  10. CMS software deployment on OSG

    International Nuclear Information System (INIS)

    Kim, B; Avery, P; Thomas, M; Wuerthwein, F

    2008-01-01

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, provide instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid security infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics that we gathered during the operation of the tools, and our experience with the CMS software deployment on the OSG Grid computing environment.
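
    The Python sketch below gives a schematic picture of the install/verify/remove cycle with corrective resubmission described above. The shell commands are echo placeholders standing in for the real deployment steps, and the retry limit is arbitrary.

      # Schematic deployment cycle with bounded retries; commands are placeholders.
      import subprocess

      def run(cmd):
          return subprocess.run(cmd, shell=True).returncode == 0

      def verify_release(release):
          # Placeholder check; a real tool would validate files and run a test job.
          return run(f"echo verifying {release}")

      def install_release(release, attempts=3):
          for attempt in range(1, attempts + 1):
              if run(f"echo installing {release}") and verify_release(release):
                  return True
              print(f"attempt {attempt} failed, resubmitting installation")
          return False

      def remove_release(release):
          return run(f"echo removing {release}")

      if __name__ == "__main__":
          if install_release("CMSSW_10_6_30"):     # release name chosen for illustration
              print("release deployed")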

  11. CMS software deployment on OSG

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B; Avery, P [University of Florida, Gainesville, FL 32611 (United States)]; Thomas, M [California Institute of Technology, Pasadena, CA 91125 (United States)]; Wuerthwein, F [University of California at San Diego, La Jolla, CA 92093 (United States)], E-mail: bockjoo@phys.ufl.edu, E-mail: thomas@hep.caltech.edu, E-mail: avery@phys.ufl.edu, E-mail: fkw@fnal.gov

    2008-07-15

    A set of software deployment tools has been developed for the installation, verification, and removal of a CMS software release. The tools, which mainly target deployment on the OSG, provide instant release deployment, corrective resubmission of the initial installation job, and an independent web-based deployment portal with a Grid security infrastructure login mechanism. We have performed over 500 installations and found the tools to be reliable and adaptable in coping with changes in the Grid computing environment and the software releases. We present the design of the tools, statistics that we gathered during the operation of the tools, and our experience with the CMS software deployment on the OSG Grid computing environment.

  12. Moment of truth for CMS

    CERN Multimedia

    2006-01-01

    One of the first events reconstructed in the Muon Drift Tubes, the Hadron Calorimeter and elements of the Silicon Tracker (TK) at 3 Tesla. The atmosphere in the CMS control rooms was electric. Everybody was at the helm for the first full-scale testing of the experiment. This was a crunch moment for the entire collaboration. On Tuesday, 22 August the magnet attained almost its nominal field of 4 Tesla! At the same moment, in a tiny improvised control room, the physicists were keyed up to test the entire detector system for the first time. The first cosmic ray tracks appeared on their screens in the week of 15 August. The tests are set to continue for several weeks more until the first CMS components are lowered into their final positions in the cavern.

  13. CMS Web-Based Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Badgett, William [Fermilab]; Lopez-Perez, Juan Antonio [Fermilab]; Maeshima, Kaori [Fermilab]; Soha, Aron [Fermilab]; Sulmanas, Balys [Fermilab]; Wan, Zongru [Kansas State U.]

    2010-01-01

    With the growth in size and complexity of High Energy Physics experiments, and the accompanying increase in the number of collaborators spread across the globe, the importance of widely relaying timely monitoring and status information has grown. To this end, we present online Web Based Monitoring solutions from the CMS experiment at CERN. The web tools developed present data to the user from many underlying heterogeneous sources, from real-time messaging systems to relational databases. We provide the power to combine and correlate data of interest to the experimentalist in both graphical and tabular formats, with data such as beam conditions, luminosity, trigger rates, detector conditions and many others, allowing for flexibility on the user side. We also present some examples of how this system has been used during CMS commissioning and early beam collision running at the Large Hadron Collider.
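
    The Python sketch below gives a toy picture of the data-combination idea described above: monitoring values from a live message stream and from database-like records are correlated by run number into one tabular view. The sources, field names and numbers are invented.

      # Toy correlation of heterogeneous monitoring sources into one table.
      live_messages = [
          {"run": 316766, "trigger_rate_hz": 95000},
          {"run": 316767, "trigger_rate_hz": 101000},
      ]
      database_rows = [
          {"run": 316766, "inst_lumi": 1.9e34, "beam": "STABLE BEAMS"},
          {"run": 316767, "inst_lumi": 2.0e34, "beam": "STABLE BEAMS"},
      ]

      def correlate(live, db):
          by_run = {row["run"]: row for row in db}       # index database rows by run
          table = []
          for msg in live:
              row = dict(by_run.get(msg["run"], {}))     # start from the database record
              row.update(msg)                            # overlay the live values
              table.append(row)
          return table

      if __name__ == "__main__":
          for row in correlate(live_messages, database_rows):
              print(row)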

  14. CMS results on hard diffraction

    CERN Document Server

    INSPIRE-00107098

    2013-01-01

    In these proceedings we present CMS results on hard diffraction. Diffractive dijet production in pp collisions at $\sqrt{s} = 7$ TeV is discussed. The cross section for dijet production is presented as a function of $\tilde{\xi}$, representing the fractional momentum loss of the scattered proton in single-diffractive events. The observation of W and Z boson production in events with a large pseudo-rapidity gap is also presented.

  15. Neutralization of tier-2 viruses and epitope profiling of plasma antibodies from human immunodeficiency virus type 1 infected donors from India.

    Directory of Open Access Journals (Sweden)

    Raiees Andrabi

    Broadly cross-neutralizing antibodies (NAbs) are generated in a group of HIV-1 infected individuals during the natural infection, but little is known about their prevalence in patients infected with viral subtypes from different geographical regions. We tested here the neutralizing efficiency of plasma antibodies from 80 HIV-1 infected antiretroviral drug naive patients against a panel of subtype-B and C tier 2 viruses. We detected cross-neutralizing antibodies in approximately 19-27% of the plasma; however, the subtype-C specific neutralization efficiency predominated (p = 0.004). The neutralizing activity was shown to be exclusively mediated by the immunoglobulin G (IgG) fraction in the representative plasma samples. Epitope mapping of the three most cross-neutralizing plasma (CNP; AIIMS206, AIIMS239 and AIIMS249) with consensus-C overlapping envelope peptides revealed ten different binding specificities, with only V3 and IDR being common. The V3 and IDR were highly antigenic regions but no correlation between their reciprocal Max50 binding titers and neutralization was observed. In addition, the neutralizing activity of CNP was not substantially reduced by V3 and gp41 peptides except for a modest contribution of the MPER peptide. The MPER was rarely recognized by plasma antibodies, though antibody depletion and competition experiments demonstrated MPER-dependent neutralization in two out of three CNP. Interestingly, the binding specificity of one of the CNP (AIIMS206) overlapped with the broadly neutralizing mAb 2F5 epitope. Overall, the data suggest that, despite the low immunogenicity of the HIV-1 MPER, the antibodies directed to this region may serve as crucial reagents for HIV-1 vaccine design.

  16. CMS results in Electroweak Physics

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    We present the results of electroweak studies performed using data collected in 2010 at a center-of-mass energy of 7 TeV by the CMS experiment at the LHC. Besides their intrinsic interest as unique samples to calibrate and understand the CMS detector response to leptons, jets and missing energy, events containing W and Z bosons appear as dominant components in many Higgs searches and in most of the searches beyond the Standard Model, either as signal or as background. In addition, the excellent level of theoretical and experimental understanding of these processes allows electroweak tests at the LHC at an unprecedented level of precision. CMS uses a wide range of final states to measure cross sections, asymmetries, polarizations and differential distributions in general. The current integrated luminosity is already sufficient to perform not just inclusive measurements using W and Z decays into muons and electrons, but also precise studies of associated jet production and final states containing taus, as well...

  17. Xrootd Monitoring for the CMS Experiment

    International Nuclear Information System (INIS)

    Bauerdick, L A T; Bloom, K; Bockelman, B; Bradley, D C; Dasu, S; Sfiligoi, I; Tadel, A; Tadel, M; Wuerthwein, F; Yagil, A

    2012-01-01

    During spring and summer of 2011, CMS deployed Xrootd-based access for all US T1 and T2 sites. This allows for remote access to all experiment data on disk in the US. It is used for user analysis, visualization, running of jobs at computing sites when data is not available at local sites, and as a fail-over mechanism for data access in jobs. Monitoring of this Xrootd infrastructure is implemented on three levels. Basic service and data availability checks are performed by Nagios probes. The second level uses Xrootd's “summary data” stream; this data is aggregated from all sites and fed into a MonALISA service providing visualization and storage. The third level uses Xrootd's “detailed monitoring” stream, which includes detailed information about users, opened files and individual data transfers. A custom application was developed to process this information. It currently provides a real-time view of the system usage and can store data into ROOT files for detailed analysis. Detailed monitoring allows us to determine dataset popularity and to detect abuses of the system, including sub-optimal usage of the Xrootd protocol and the ROOT prefetching mechanism.
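
    As a rough sketch of the third monitoring level described above, the Python snippet below aggregates per-transfer records into dataset popularity counters (accesses and bytes read). The record format and the dataset-naming convention are invented and do not correspond to the actual Xrootd detailed-monitoring packet layout.

      # Toy aggregation of detailed transfer records into dataset popularity.
      from collections import defaultdict

      def dataset_of(lfn):
          """Toy convention: /store/data/<era>/<primary>/... -> '<era>/<primary>'."""
          parts = lfn.strip("/").split("/")
          return "/".join(parts[2:4]) if len(parts) > 3 else "unknown"

      def popularity(transfer_records):
          stats = defaultdict(lambda: {"accesses": 0, "bytes_read": 0})
          for rec in transfer_records:
              ds = dataset_of(rec["lfn"])
              stats[ds]["accesses"] += 1
              stats[ds]["bytes_read"] += rec["bytes_read"]
          return dict(stats)

      if __name__ == "__main__":
          records = [   # invented transfer records
              {"user": "alice", "lfn": "/store/data/Run2011A/SingleMu/AOD/f1.root",
               "bytes_read": 2400000000},
              {"user": "bob", "lfn": "/store/data/Run2011A/SingleMu/AOD/f2.root",
               "bytes_read": 1100000000},
          ]
          print(popularity(records))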

  18. Xrootd monitoring for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. A. T. [Fermilab]; Bloom, K. [Nebraska U.]; Bockelman, B. [Nebraska U.]; Bradley, D. C. [Wisconsin U., Madison]; Dasu, S. [Wisconsin U., Madison]; Sfiligoi, I. [UC, San Diego]; Tadel, A. [UC, San Diego]; Tadel, M. [UC, San Diego]; Wuerthwein, F. [UC, San Diego]; Yagil, A. [UC, San Diego]

    2012-01-01

    During spring and summer of 2011, CMS deployed Xrootd-based access for all US T1 and T2 sites. This allows for remote access to all experiment data on disk in the US. It is used for user analysis, visualization, running of jobs at computing sites when data is not available at local sites, and as a fail-over mechanism for data access in jobs. Monitoring of this Xrootd infrastructure is implemented on three levels. Basic service and data availability checks are performed by Nagios probes. The second level uses Xrootd's summary data stream; this data is aggregated from all sites and fed into a MonALISA service providing visualization and storage. The third level uses Xrootd's detailed monitoring stream, which includes detailed information about users, opened files and individual data transfers. A custom application was developed to process this information. It currently provides a real-time view of the system usage and can store data into ROOT files for detailed analysis. Detailed monitoring allows us to determine dataset popularity and to detect abuses of the system, including sub-optimal usage of the Xrootd protocol and the ROOT prefetching mechanism.

  19. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab]

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources — resources not owned by, or a priori configured for, CMS — to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.
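
    The Python sketch below caricatures the scheduling idea described above: jobs prefer dedicated slots and spill over to opportunistic ones only under peak demand, and jobs routed to non-CMS resources are wrapped so they can first set up the CMS environment (a stand-in for the Bosco and parrot wrappers). Pool names, thresholds and the wrapper command are invented; this is not how glideinWMS is actually configured.

      # Toy routing of jobs between dedicated and opportunistic slots.
      def route_job(job, dedicated_free, opportunistic_free):
          """Return (pool, command) for one job given the current free slot counts."""
          if dedicated_free > 0:
              return "dedicated", job["command"]
          if opportunistic_free > 0:
              # On non-CMS resources the job is wrapped so the CMS environment can be
              # bootstrapped first; the wrapper script name is invented.
              return "opportunistic", f"bootstrap_cms_env.sh {job['command']}"
          return "queued", None

      if __name__ == "__main__":
          job = {"command": "cmsRun analysis_cfg.py"}
          print(route_job(job, dedicated_free=120, opportunistic_free=500))
          print(route_job(job, dedicated_free=0, opportunistic_free=500))
          print(route_job(job, dedicated_free=0, opportunistic_free=0))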

  20. Photos from the CMS Photo Book

    CERN Multimedia

    Boreham, S

    2008-01-01

    Photos from the CMS Photo Book. Activities at Point 5 in Cessy, France, between 1998 and 2008. Images of assembly and installation of the CMS detector: civil engineering; assembly in the surface building; lowering of the heavy elements; installing and connecting the CMS detector in the underground experiment caverns. These images illustrate the assembly, installation and commissioning of the CMS detector. They cover the activities at Point 5 in Cessy, France, between 1998 and 2008. CMS is one of the most complex scientific instruments ever built. It has taken about 20 years to go from conceptual design to the completion of construction of the CMS detector for the LHC start-up in September 2008. Accomplishing this has required the talents, efforts and resources of over 2500 scientists and engineers from about 180 institutions in 38 countries. Compiled by: S. Cittolin, F. Marcastel and T.S. Virdee

  1. CMS Centres Worldwide - a New Collaborative Infrastructure

    International Nuclear Information System (INIS)

    Taylor, Lucas

    2011-01-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected 'CMS Centres' at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running 'telepresence' video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  2. Soft QCD at CMS and ATLAS

    CERN Document Server

    Starovoitov, Pavel; The ATLAS collaboration

    2018-01-01

    A short overview of the recent soft QCD results from the ATLAS and CMS collaborations is presented. The inelastic cross section measurement by CMS at 13 TeV is summarised. The contribution of the diffractive processes to the very forward photon spectra studied by ATLAS and LHCf is discussed. The ATLAS measurements of the exclusive two-photon production of the muon pairs is presented and compared to the previous ATLAS and CMS results.

  3. CMS Centres Worldwide - a New Collaborative Infrastructure

    CERN Document Server

    Taylor, Lucas

    2011-01-01

    The CMS Experiment at the LHC has established a network of more than fifty inter-connected 'CMS Centres' at CERN and in institutes in the Americas, Asia, Australasia, and Europe. These facilities are used by people doing CMS detector and computing grid operations, remote shifts, data quality monitoring and analysis, as well as education and outreach. We present the computing, software, and collaborative tools and videoconferencing systems. These include permanently running 'telepresence' video links (hardware-based H.323, EVO and Vidyo), Webcasts, and generic Web tools such as CMS-TV for broadcasting live monitoring and outreach information. Being Web-based and experiment-independent, these systems could easily be extended to other organizations. We describe the experiences of using CMS Centres Worldwide in the CMS data-taking operations as well as for major media events with several hundred TV channels, radio stations, and many more press journalists simultaneously around the world.

  4. Readiness of CMS simulation towards LHC startup

    International Nuclear Information System (INIS)

    Banerjee, S

    2008-01-01

    The CMS experiment has used detector simulation software in its conceptual as well as technical design. With the detector construction near its completion, the role of simulation has changed toward understanding collision data to be collected by CMS in near future. CMS simulation software is becoming a data driven, realistic and accurate Monte Carlo programme. The software architecture is described with some detail of the framework as well as detector specific components. Performance issues are discussed as well

  5. 42 CFR 422.510 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Termination of contract by CMS. 422.510 Section 422... Advantage Organizations § 422.510 Termination of contract by CMS. (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the MA organization meets any of the following: (1...

  6. 42 CFR 423.509 - Termination of contract by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Termination of contract by CMS. 423.509 Section 423... Contracts with Part D plan sponsors § 423.509 Termination of contract by CMS. (a) Termination by CMS. CMS may at any time terminate a contract if CMS determines that the Part D plan sponsor meets any of the...

  7. CMS data and workflow management system

    CERN Document Server

    Fanfani, A; Bacchi, W; Codispoti, G; De Filippis, N; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Silvestris, L; Calzolari, F; Sarkar, S; Spiga, D; Cinquili, M; Lacaprara, S; Biasotto, M; Farina, F; Merlo, M; Belforte, S; Kavka, C; Sala, L; Harvey, J; Hufnagel, D; Fanzago, F; Corvo, M; Magini, N; Rehn, J; Toteva, Z; Feichtinger, D; Tuura, L; Eulisse, G; Bockelman, B; Lundstedt, C; Egeland, R; Evans, D; Mason, D; Gutsche, O; Sexton-Kennedy, L; Dagenhart, D W; Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Fisk, I; McBride, P; Bauerdick, L; Bakken, J; Rossman, P; Wicklund, E; Wu, Y; Jones, C; Kuznetsov, V; Riley, D; Dolgert, A; van Lingen, F; Narsky, I; Paus, C; Klute, M; Gomez-Ceballos, G; Piedra-Gomez, J; Miller, M; Mohapatra, A; Lazaridis, C; Bradley, D; Elmer, P; Wildish, T; Wuerthwein, F; Letts, J; Bourilkov, D; Kim, B; Smith, P; Hernandez, J M; Caballero, J; Delgado, A; Flix, J; Cabrillo-Bartolome, I; Kasemann, M; Flossdorf, A; Stadie, H; Kreuzer, P; Khomitch, A; Hof, C; Zeidler, C; Kalini, S; Trunov, A; Saout, C; Felzmann, U; Metson, S; Newbold, D; Geddes, N; Brew, C; Jackson, J; Wakefield, S; De Weirdt, S; Adler, V; Maes, J; Van Mulders, P; Villella, I; Hammad, G; Pukhaeva, N; Kurca, T; Semneniouk, I; Guan, W; Lajas, J A; Teodoro, D; Gregores, E; Baquero, M; Shehzad, A; Kadastik, M; Kodolova, O; Chao, Y; Ming Kuo, C; Filippidis, C; Walzel, G; Han, D; Kalinowski, A; Giro de Almeida, N M; Panyam, N

    2008-01-01

    CMS expects to manage many tens of petabytes of data to be distributed over several computing centers around the world. The CMS distributed computing and analysis model is designed to serve, process and archive the large number of events that will be generated when the CMS detector starts taking data. The underlying concepts and the overall architecture of the CMS data and workflow management system will be presented. In addition the experience in using the system for MC production, initial detector commissioning activities and data analysis will be summarized.

  8. Site in a box: Improving the Tier 3 experience

    Science.gov (United States)

    Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.

    2017-10-01

    The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 Gbps network. The LHC @ UC is a proof-of-concept pilot project that focuses on interconnecting six University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus, extending the concept of the Data Transfer Node (DTN) to a 'cluster in a box' that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system, and also an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and give the ability to arbitrarily export data from the home university, to make it available wherever the jobs run. The UC-level federation also statically redirects to either the ATLAS FAX or CMS AAA federation respectively to make globally published datasets available, depending on end-user VO membership credentials. XRootD read operations from the federation transfer through the nearest DTN proxy cache located at the site where the jobs run. This reduces wide area network overhead for subsequent accesses, and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.
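
    The read path described above (try a nearby proxy cache first, fall back to the wider federation) can be sketched as below. The redirector host names and the file path are placeholders, and the sketch simply shells out to the standard xrdcp client rather than reproducing the actual LHC @ UC configuration.

        # Sketch: copy a file via a nearby XRootD proxy cache, falling back to a
        # global federation redirector. Hosts and path are illustrative only.
        import subprocess

        SOURCES = [
            "root://cache.example-uc.edu//store/user/demo/file.root",    # local DTN cache
            "root://federation.example.org//store/user/demo/file.root",  # global federation
        ]

        def fetch(destination="file.root"):
            for url in SOURCES:
                result = subprocess.run(["xrdcp", "-f", url, destination])
                if result.returncode == 0:
                    return url  # report which source worked
            raise RuntimeError("all XRootD sources failed")

        if __name__ == "__main__":
            print("fetched from", fetch())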

  9. Electroweak precision measurements in CMS

    CERN Document Server

    Dordevic, Milos

    2017-01-01

    An overview of recent results on electroweak precision measurements from the CMS Collaboration is presented. Studies of the weak boson differential transverse momentum spectra, Z boson angular coefficients, forward-backward asymmetry of Drell-Yan lepton pairs and charge asymmetry of W boson production are made in comparison to the state-of-the-art Monte Carlo generators and theoretical predictions. The results show a good agreement with the Standard Model. As a proof of principle for future W mass measurements, a W-like analysis of the Z boson mass is performed.

  10. Precision proton spectrometers for CMS

    CERN Document Server

    Albrow, Michael

    2013-01-01

    We plan to add high-precision tracking and timing detectors at z = ±240 m to CMS to study exclusive processes p + p → p + X + p at high luminosity. This enables the LHC to be used as a tagged photon-photon collider, with X = l+l- and W+W-, and as a "tagged" gluon-gluon collider (with a spectator gluon) for QCD studies with jets. A second stage at z = 240 m would allow observations of exclusive Higgs boson production.

  11. CMS Barrel Pixel Detector Overview

    CERN Document Server

    Kästli, H C; Erdmann, W; Gabathuler, K; Hörmann, C; Horisberger, Roland Paul; König, S; Kotlinski, D; Meier, B; Robmann, P; Rohe, T; Streuli, S

    2007-01-01

    The pixel detector is the innermost tracking device of the CMS experiment at the LHC. It is built from two independent sub devices, the pixel barrel and the end disks. The barrel consists of three concentric layers around the beam pipe with mean radii of 4.4, 7.3 and 10.2 cm. There are two end disks on each side of the interaction point at 34.5 cm and 46.5 cm. This article gives an overview of the pixel barrel detector, its mechanical support structure, electronics components, services and its expected performance.

  12. CMS magnet Conference MT17

    CERN Multimedia

    2001-01-01

    The CMS magnet system consists of the superconducting coil, the magnet yoke (barrel and endcap), the vacuum tank and ancillaries such as cryogenics and power supply. The axial magnetic field is 4 Tesla, the yoke diameter is 14 m across flats, the axial yoke length including endcaps is 21.6 m and the total mass is about 12000 tonnes. It will be the largest superconducting magnet in the world in terms of stored energy: 2.7 GJ, enough to melt about 18 tonnes of gold.
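
    The "melt tonnes of gold" comparison can be checked with a rough back-of-the-envelope calculation, shown below with approximate textbook material constants. The exact tonnage depends on the assumed starting temperature and constants, but the order of magnitude agrees with the quoted figure.

        # Rough check of the "2.7 GJ can melt ~18 tonnes of gold" comparison.
        # Material constants are approximate textbook values.
        E_stored = 2.7e9          # J, energy stored in the CMS solenoid
        c_gold   = 129.0          # J/(kg K), specific heat of gold
        T_melt   = 1064.0         # deg C, melting point of gold
        T_start  = 20.0           # deg C, assumed starting temperature
        L_fusion = 6.4e4          # J/kg, latent heat of fusion of gold

        energy_per_kg = c_gold * (T_melt - T_start) + L_fusion
        mass_kg = E_stored / energy_per_kg
        print(f"~{mass_kg/1000:.0f} tonnes of gold")  # roughly 14 t, same order as quoted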

  13. The CMS workload management system

    Energy Technology Data Exchange (ETDEWEB)

    Cinquilli, M. [CERN; Evans, D. [Fermilab; Foulkes, S. [Fermilab; Hufnagel, D. [Fermilab; Mascheroni, M. [CERN; Norman, M. [UC, San Diego; Maxa, Z. [Caltech; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Riahi, H. [INFN, Perugia; Ryu, S. [Fermilab; Spiga, D. [CERN; Vaandering, E. [Fermilab; Wakefield, Stuart [Imperial Coll., London; Wilkinson, R. [Caltech

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager), a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).
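
    The division of labour described above can be illustrated with a toy model: a central request is split by a work-queue component into smaller elements, which agents then pull and execute against whatever local resources they front. This is a conceptual sketch only, with a made-up request name; it is not the Request Manager, WorkQueue or WMAgent code.

        # Toy model of the request -> work queue -> agent flow described above.
        from collections import deque

        def split_request(request, chunk_size):
            """WorkQueue-like step: split a request into independent work elements."""
            njobs = request["jobs"]
            return [{"request": request["name"], "first": i,
                     "count": min(chunk_size, njobs - i)}
                    for i in range(0, njobs, chunk_size)]

        class Agent:
            """WMAgent-like step: pulls elements and runs them on a local resource."""
            def __init__(self, name):
                self.name = name
                self.done = []

            def pull_and_run(self, queue):
                if queue:
                    element = queue.popleft()
                    self.done.append(element)  # stand-in for submitting to a batch system

        if __name__ == "__main__":
            queue = deque(split_request({"name": "ExampleReReco", "jobs": 10}, chunk_size=3))
            agents = [Agent("site_A"), Agent("site_B")]
            while queue:
                for agent in agents:
                    agent.pull_and_run(queue)
            for agent in agents:
                print(agent.name, "ran", len(agent.done), "elements")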

  14. The CMS workload management system

    International Nuclear Information System (INIS)

    Cinquilli, M; Mascheroni, M; Spiga, D; Evans, D; Foulkes, S; Hufnagel, D; Ryu, S; Vaandering, E; Norman, M; Maxa, Z; Wilkinson, R; Melo, A; Metson, S; Riahi, H; Wakefield, S

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager), a task queue for parcelling up and distributing work (WorkQueue), and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  15. Progress on CMS detector lowering

    CERN Multimedia

    2006-01-01

    It was an amazing engineering challenge - the lowering of the first huge endcap disc (YE+3) of the CMS detector slowly and carefully 100 metres underground. The spectacular descent took place on 30 November and was documented by a film crew from Reuters news group. The uniquely shaped slice is 16 m high, about 50 cm thick, and weighs 400 tonnes. It is one of 15 sections that make up the complete CMS detector. The solid steel structure of the disc forms part of the magnet return yoke and is equipped on both sides with muon chambers. A special gantry crane lowered the element, with just 20 cm of leeway between the edges of the detector and the walls of the shaft! On 12 December, a further section of the detector (YE+2) containing the cathode strip chamber made the 10-hour journey underground. This piece is 16 m high and weighs 880 tonnes. There are now four sections of the detector in the experimental cavern, with a further 11 to follow. The endcap disc YE+3 (seen in the foreground) begins its journey down the ...

  16. Exotic quarkonium states in CMS

    CERN Document Server

    Cristella, Leonardo

    2017-01-01

    The studies of the production of the $X(3872)$, either prompt or from B hadron decays, and of the $J/\\psi \\phi$ mass spectrum in B hadron decays have been carried out using $pp$ collisions at $\\sqrt{s}=7$ TeV collected with the CMS detector at the LHC. The production of the $X(3872)$ is studied using decays to $J/\\psi\\pi^{+}\\pi^{-}$ where the $J/\\psi$ decays to two muons. The cross-section ratio of the $X(3872)$ with respect to the $\\psi(2S)$ in the $J/\\psi\\pi^{+}\\pi^{-}$ decay channel and the fraction of $X(3872)$ coming from B-hadron decays are measured as a function of transverse momentum ($p\\mathrm{_T}$), covering unprecedentedly high values of $p\\mathrm{_T}$. For the first time, the prompt production cross section for the $X(3872)$ times the unknown branching fraction for the decay $X(3872) \\rightarrow J/\\psi\\pi^{+}\\pi^{-}$ is extracted differentially in $p\\mathrm{_T}$ and compared to theoretical predictions based on the Non-R...

  17. The CMS tracker control system

    International Nuclear Information System (INIS)

    Dierlamm, A; Dirkes, G H; Fahrer, M; Frey, M; Hartmann, F; Masetti, L; Militaru, O; Shah, S Y; Stringer, R; Tsirou, A

    2008-01-01

    The Tracker Control System (TCS) is a distributed control software system that operates about 2000 power supplies for the silicon modules of the CMS Tracker and monitors its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10⁵ parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention.
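
    The hierarchical command/state pattern described above (commands propagate down the tree, summarized states propagate up to the root) can be sketched in a few lines. The node names and the simple three-state model below are illustrative only, not the actual PVSS/JCOP finite state machine.

        # Sketch of a hierarchical control tree: commands go down, states come up.
        class Node:
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []
                self.state = "OFF"

            def command(self, action):
                # Commands propagate down to the leaves (the hardware devices).
                self.state = "ON" if action == "switch_on" else "OFF"
                for child in self.children:
                    child.command(action)

            def summary_state(self):
                # States are summarized upwards: any deviating child degrades the parent.
                if not self.children:
                    return self.state
                states = {c.summary_state() for c in self.children}
                return states.pop() if len(states) == 1 else "PARTLY_ON"

        if __name__ == "__main__":
            tracker = Node("TRACKER", [
                Node("TIB", [Node("TIB_layer1"), Node("TIB_layer2")]),
                Node("TOB", [Node("TOB_layer1")]),
            ])
            tracker.command("switch_on")
            tracker.children[0].children[1].command("switch_off")  # one partition trips
            print(tracker.summary_state())  # -> PARTLY_ON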

  18. The CMS tracker control system

    Science.gov (United States)

    Dierlamm, A.; Dirkes, G. H.; Fahrer, M.; Frey, M.; Hartmann, F.; Masetti, L.; Militaru, O.; Shah, S. Y.; Stringer, R.; Tsirou, A.

    2008-07-01

    The Tracker Control System (TCS) is a distributed control software system that operates about 2000 power supplies for the silicon modules of the CMS Tracker and monitors its environmental sensors. TCS must thus be able to handle about 10⁴ power supply parameters, about 10³ environmental probes from the Programmable Logic Controllers of the Tracker Safety System (TSS), and about 10⁵ parameters read via DAQ from the DCUs in all front-end hybrids and from CCUs in all control groups. TCS is built on top of an industrial SCADA program (PVSS) extended with a framework developed at CERN (JCOP) and used by all LHC experiments. The logical partitioning of the detector is reflected in the hierarchical structure of the TCS, where commands move down to the individual hardware devices, while states are reported up to the root, which is interfaced to the broader CMS control system. The system computes and continuously monitors the mean and maximum values of critical parameters and updates the percentage of currently operating hardware. Automatic procedures switch off selected parts of the detector using detailed granularity and avoiding widespread TSS intervention.

  19. CMS announces new payment model

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2018-01-01

    Full Text Available No abstract available. Article truncated after 150 words. On Tuesday, 1/9/18, the Centers for Medicare and Medicaid Services (CMS) announced a new voluntary bundled-payment model that will be considered an advanced alternative payment model under the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) (1). The new model is the first advanced Alternative Payment Model (APM) to be introduced by the Trump administration. The Trump administration has been a vocal advocate of reducing administrative burden for clinicians and has touted voluntary models as a solution (2). The new, voluntary model comes less than two months after the CMS officially decided to eliminate two mandatory bundled-payment models created during the Obama administration. Under the model, clinician payment will be based on quality measures during a 90-day episode of care. Participants must select at least one of the 32 clinical episodes to apply to the model. The inpatient clinical episodes are listed in Table 1 (3). Table 1. Clinical inpatient episodes under …

  20. CMS: Beyond all possible expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    After having retraced the entire Standard Model up to the Top, the CMS collaboration is ready to go further and continue the success of what Guido Tonelli – its spokesperson – defines as a ‘magic year’. Things evolve fast at CMS, but scientists have taken up the challenge and are ready for the future.   ‘Enthusiasm’ is the word that best describes the feeling one gets when talking to Guido Tonelli. “In just a few months we have rediscovered the Standard Model and have gone even further by producing new results for cross-sections, placing new limits on the creation of heavy masses, making studies on the excited states of quarks, and seeking new resonances. We could not have expected so much in such a short space of time. It’s fantastic”, he says. “We went through the learning phase very smoothly. Our detector was very quickly ready to do real physics and we were able to start to produce results almost ...

  1. CMS inaugurates its high-tech visitor centre

    CERN Multimedia

    Antonella Del Rosso

    2014-01-01

    The new Building SL53 on CERN’s Cessy site in France is ready to welcome the thousands of visitors (30,000 in 2013) who come to learn about CMS each year. It boasts low energy consumption and the possibility, in the future, of being heated by recycling the heat given off by the detector.   The new Building SL53 at CERN’s Cessy site in France will be inaugurated on 24 May 2014. “Constructed by the GS Department and the firm Dimensione, the building meets the operational requirements of the CMS experiment, which require the uninterrupted use of its infrastructure,” explains Martin Gastal, the member of the collaboration in charge of the project. Its 560 m² surface area features a meeting room, eight offices, an open space for CMS users, a rest area with a kitchen, sanitary facilities including showers, and a conference room in which to receive visitors. “The new conference room on the ground floor can accommodate 50 people ...

  2. 23 CFR 500.109 - CMS.

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false CMS. 500.109 Section 500.109 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT MANAGEMENT AND MONITORING SYSTEMS Management Systems § 500.109 CMS. (a) For purposes of this part, congestion means the level at...

  3. Set of CMS posters in Spanish

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  4. Set of CMS posters in Greek

    CERN Multimedia

    Lapka, Marzena; Petrilli, Achille

    2015-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  5. Jim Virdee, the new spokesperson of CMS

    CERN Multimedia

    2006-01-01

    Jim Virdee and Michel Della Negra. On 21 June Tejinder 'Jim' Virdee was elected by the CMS collaboration as its new spokesperson, his 3-year term of office beginning in January 2007. He will take over from Michel Della Negra, who has been CMS spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Dan Green from Fermilab, programme manager of the US-CMS collaboration and coordinator of the CMS Hadron Calorimeter project; Jim Virdee from Imperial College London and CERN, deputy spokesperson of CMS since 1993; Gigi Rolandi from the University of Trieste and CERN, ex-Aleph spokesperson and currently involved in the preparations of the physics analyses to be done with CMS. On the early evening of 21 June, 141 of the 142 members of the CMS collaboration board, some represented by proxies, took part in a secret ballot. After two rounds of voting Jim Virdee was elected as spokesperson with a clear majority. Jim thanked the CMS collaboration 'for putting conf...

  6. CMS installations are put to the test

    CERN Multimedia

    2006-01-01

    CMS has just undergone two important tests: a spectacular test of the fire extinguishing system in the underground cavern (photo) and, on the surface, a strength test on the plug over the main shaft, which will bear the weight of the detector components when they are lowered into the CMS hall.

  7. Iron Blocks of CMS Magnet Barrel Yoke.

    CERN Multimedia

    2000-01-01

    On the occasion of presenting the CMS Award 2000 to Deggendorfer Werft und Eisenbau GmbH the delivered blocks were inspected at CERN Point 5. From left to right: H. Gerwig (CERN, CMS Magnet Barrel Yoke Coordinator), G. Waurick (CERN), F. Leher (DWE, Project Engineer) and W. Schuster (DWE, Project Manager).

  8. Set of CMS posters (multiple languages)

    CERN Multimedia

    Lapka, Marzena; Rao, Achintya

    2014-01-01

    14 A0 posters in English to be printed locally or displayed online. Purpose: science fairs, exhibitions, preparatory material for the CMS virtual visits, etc. Themes: CMS detector, sub-detectors, construction, lowering and installation, collaboration and physics. Available in many languages.

  9. CMS: Present status, limitations, and upgrade plans

    International Nuclear Information System (INIS)

    Cheung, H.W.K.

    2011-01-01

    An overview of the CMS upgrade plans will be presented. A brief status of the CMS detector will be given, covering some of the issues we have so far experienced. This will be followed by an overview of the various CMS upgrades planned, covering the main motivations for them, and the various R&D efforts for the possibilities under study. The CMS detector has been working extremely well since the start of data-taking at the LHC as is evidenced by the numerous excellent results published by CMS and presented at this workshop and recent conferences. Less well documented are the various issues that have been encountered with the detector. In the spirit of this workshop I will cover some of these issues with particular emphasis on problems that motivate some of the upgrades to the CMS detector for this decade of data-taking. Though the CMS detector has been working extremely well and expectations are great for making the most of the LHC luminosity, there have been a number of issues encountered so far. Some of these have been described and while none currently presents a problem for physics performance, some of them are expected to become more problematic, especially at the highest Phase 1 luminosities for which the majority of the integrated luminosity will be collected. These motivate upgrades for various parts of the CMS detector so that the current excellent physics performance can be maintained or even surpassed in the realm of the highest Phase 1 luminosities.

  10. Status and commissioning of the CMS experiment

    Science.gov (United States)

    Wulz, C.-E.

    2008-05-01

    The construction status of the CMS experiment at the Large Hadron Collider and strategies for commissioning the subdetectors, the magnet, the trigger and the data acquisition are described. The first operations of CMS as a unified system, using either cosmic rays or test data, and the planned activities until the startup of the LHC are presented.

  11. Status and Commissioning of the CMS Experiment

    CERN Document Server

    Wulz, Claudia-Elisabeth

    2008-01-01

    The construction status of the CMS experiment at the Large Hadron Collider and strategies for commissioning the subdetectors, the magnet, the trigger and the data acquisition are described. The first operations of CMS as a unified system, using either cosmic rays or test data, and the planned activities until the startup of the LHC are presented.

  12. Status and commissioning of the CMS experiment

    International Nuclear Information System (INIS)

    Wulz, C-E

    2008-01-01

    The construction status of the CMS experiment at the Large Hadron Collider and strategies for commissioning the subdetectors, the magnet, the trigger and the data acquisition are described. The first operations of CMS as a unified system, using either cosmic rays or test data, and the planned activities until the startup of the LHC are presented

  13. Integrating Amazon EC2 with the CMS production framework

    International Nuclear Information System (INIS)

    Melo, Andrew; Sheldon, Paul

    2012-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with cloud computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2 for both production and analysis use cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.
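
    A break-even comparison of the kind mentioned above can be sketched as a simple cost model: dedicated hardware carries a fixed yearly cost regardless of utilization, while cloud capacity is billed per busy core-hour. All the cost figures below are made-up placeholders, not the real-world costs derived from a CMS site.

        # Toy break-even model: dedicated cluster vs. on-demand cloud capacity.
        # All cost figures are illustrative placeholders.
        DEDICATED_COST_PER_CORE_YEAR = 120.0   # USD: hardware, power, admin (assumed)
        CLOUD_COST_PER_CORE_HOUR     = 0.05    # USD (assumed)
        HOURS_PER_YEAR               = 8760

        def yearly_cost(cores, utilization, use_cloud):
            busy_core_hours = cores * HOURS_PER_YEAR * utilization
            if use_cloud:
                return busy_core_hours * CLOUD_COST_PER_CORE_HOUR
            return cores * DEDICATED_COST_PER_CORE_YEAR  # paid whether busy or idle

        if __name__ == "__main__":
            for utilization in (0.1, 0.27, 0.5, 0.9):
                dedicated = yearly_cost(1000, utilization, use_cloud=False)
                cloud = yearly_cost(1000, utilization, use_cloud=True)
                cheaper = "cloud" if cloud < dedicated else "dedicated"
                print(f"utilization {utilization:4.0%}: dedicated ${dedicated:,.0f}, "
                      f"cloud ${cloud:,.0f} -> {cheaper} is cheaper")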

  14. Integrating Amazon EC2 with the CMS Production Framework

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    As cloud middleware and cloud providers have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with cloud computing. Successful implementation will allow for dynamic scaling of the available hardware resources, providing access to peak-load handling capabilities and possibly resulting in lower costs to the experiment. Here we discuss current work within the CMS collaboration at the LHC to perform computation on EC2 for both production and analysis use cases. We also discuss break-even points between dedicated and cloud resources using real-world costs derived from a CMS site.

  15. Search for supersymmetry with jets, missing transverse momentum, and a single tau at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, Friederike

    2012-06-15

    This thesis presents a search for physics beyond the Standard Model with jets, missing transverse momentum, and a single tau. It aims especially at a cosmologically favored region in Supersymmetry, the stau-LSP co-annihilation region with an enhanced production of taus. It is performed with data taken in 2011 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 5 fb⁻¹. The background was divided into two different contributions, one with real and one with fake taus. Both estimates were derived with data-driven techniques. The final measurement yields 28 events, while the number of background events was predicted to be 28.5 ± 2.6 (stat) ± 2.4 (syst), and thus, no deviation from the Standard Model could be found. As a result, exclusion limits on the supersymmetric cMSSM have been calculated. Data taking and distribution impose a challenge on the computing grid. To monitor the stability of the infrastructure, modules for Tier-2 operations within the HappyFace Project have been developed, tested, and taken into use.

  16. Search for supersymmetry with jets, missing transverse momentum, and a single tau at CMS

    International Nuclear Information System (INIS)

    Nowak, Friederike

    2012-06-01

    This thesis presents a search for physics beyond the Standard Model with jets, missing transverse momentum, and a single tau. It aims especially at a cosmologically favored region in Supersymmetry, the stau-LSP co-annihilation region with an enhanced production of taus. It is performed with data taken in 2011 by the CMS experiment at the LHC, corresponding to an integrated luminosity of 5 fb⁻¹. The background was divided into two different contributions, one with real and one with fake taus. Both estimates were derived with data-driven techniques. The final measurement yields 28 events, while the number of background events was predicted to be 28.5 ± 2.6 (stat) ± 2.4 (syst), and thus, no deviation from the Standard Model could be found. As a result, exclusion limits on the supersymmetric cMSSM have been calculated. Data taking and distribution impose a challenge on the computing grid. To monitor the stability of the infrastructure, modules for Tier-2 operations within the HappyFace Project have been developed, tested, and taken into use.

  17. Last crystals for the CMS chandelier

    CERN Multimedia

    2008-01-01

    In March, the last crystals for CMS’s electromagnetic calorimeter arrived from Russia and China. Like dedicated jewellers crafting an immense chandelier, the CMS ECAL collaborators are working extremely hard to install all the crystals before the start-up of the LHC. (Photo captions: one of the last CMS end-cap crystals, complete with identification bar code; lead tungstate crystals mounted onto one section of the CMS ECAL end caps.) Nearly 10 years after the first production crystal arrived at CERN in September 1998, the very last shipment has arrived. These final crystals will be used to complete the end-caps of the electromagnetic calorimeter (ECAL) at CMS. All in all, there are more than 75,000 crystals in the ECAL. The huge quantity of CMS lead tungstate crystals used in the ECAL corresponds to the highest volume ever produced for a single experiment. The excellent quality of the crystals, both in ter...

  18. Science on Drupal: An evaluation of CMS Technologies

    Science.gov (United States)

    Vinay, S.; Gonzalez, A.; Pinto, A.; Pascuzzi, F.; Gerard, A.

    2011-12-01

    We conducted an extensive evaluation of various Content Management System (CMS) technologies for implementing different websites supporting interdisciplinary science data and information. We chose two products, Drupal and Bluenog/Hippo CMS, to meet our specific needs and requirements. Drupal is an open source product that is quick and easy to setup and use. It is a very mature, stable, and widely used product. It has rich functionality supported by a large and active user base and developer community. There are many plugins available that provide additional features for managing citations, map gallery, semantic search, digital repositories (fedora), scientific workflows, collaborative authoring, social networking, and other functions. All of these work very well within the Drupal framework if minimal customization is needed. We have successfully implemented Drupal for multiple projects such as: 1) the Haiti Regeneration Initiative (http://haitiregeneration.org/); 2) the Consortium on Climate Risk in the Urban Northeast (http://beta.ccrun.org/); and 3) the Africa Soils Information Service (http://africasoils.net/). We are also developing two other websites, the Côte Sud Initiative (CSI) and Emerging Infectious Diseases, using Drupal. We are testing the Drupal multi-site install for managing different websites with one install to streamline the maintenance. In addition, paid support and consultancy for Drupal website development are available at affordable prices. All of these features make Drupal very attractive for implementing state-of-the-art scientific websites that do not have complex requirements. One of our major websites, the NASA Socioeconomic Data and Applications Center (SEDAC), has a very complex set of requirements. It has to easily re-purpose content across multiple web pages and sites with different presentations. It has to serve the content via REST or similar standard interfaces so that external client applications can access content in the CMS

  19. Operational experience with the GEM detector assembly lines for the CMS forward muon upgrade

    CERN Document Server

    Vai, Ilaria

    2017-01-01

    The CMS Collaboration has been developing large-area Triple-GEM detectors to be installed in the muon endcap regions of the CMS experiment in 2019 to maintain forward muon trigger and tracking performance at the HL-LHC. Ten pre-production detectors were built at CERN to commission the first assembly line and the quality controls. These were installed in the CMS detector in early 2017 and are currently participating in the 2017 LHC run. The collaboration has prepared several additional assembly and quality control lines for distributed mass production of 160 GEM detectors at various sites worldwide. During 2017, these additional production sites have been optimizing construction techniques and quality control procedures and validating them against common specifications by constructing additional pre-production detectors. Using the specific experience from one production site as an example, we discuss how the quality controls make use of independent hardware and trained personnel to ensure fast and reliable pro...

  20. Constraining nuclear PDFs with CMS

    CERN Document Server

    Chapon, Emilien

    2017-01-01

    Nuclear parton distribution functions (nPDFs) are essential to the understanding of proton-lead collisions. We will review several measurements from CMS that are particularly sensitive to nPDFs. W and Z bosons are medium-blind probes of the initial state of the collisions, and we will present the measurements of their production cross sections in pPb collisions at 5.02 TeV, as well as asymmetries with an increased sensitivity to nPDFs. We will also report measurements of charmonium production, including the nuclear modification factor of J/$\\psi$ and $\\psi$(2S) in pPb collisions at 5.02 TeV, though other cold nuclear matter effects may also be at play in those processes. Finally, we will present measurements of the pseudorapidity of dijets in pPb collisions at 5.02 TeV.

  1. The CMS Data Management System

    Science.gov (United States)

    Giffels, M.; Guo, Y.; Kuznetsov, V.; Magini, N.; Wildish, T.

    2014-06-01

    The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the data transfer and location system; the Data Bookkeeping Service (DBS), a metadata catalog; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data have been moved since the run began. The modular system has allowed the optimal use of appropriate underlying technologies. In this contribution we will discuss the use of both Oracle and NoSQL databases to implement the data management elements as well as the individual architectures chosen. We will discuss how the data management system functioned during the first run, and what improvements are planned in preparation for 2015.
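
    The aggregation idea behind a service like DAS can be illustrated with a toy example: per-service records (a bookkeeping-style catalog and a transfer/location-style system) keyed on the same dataset name are merged into one combined view. The service payloads, field names and dataset name below are invented for illustration and are not the real DBS or PhEDEx data formats.

        # Toy aggregation of per-service metadata into one combined view,
        # in the spirit of a data aggregation service. All records are invented.
        from collections import defaultdict

        bookkeeping = [  # bookkeeping-style metadata: what a dataset contains
            {"dataset": "/Demo/Run2012A/AOD", "files": 1200, "events": 4.5e7},
        ]
        locations = [    # location-style metadata: where replicas live
            {"dataset": "/Demo/Run2012A/AOD", "site": "T2_US_Example", "complete": True},
            {"dataset": "/Demo/Run2012A/AOD", "site": "T1_DE_Example", "complete": False},
        ]

        def aggregate(*services):
            view = defaultdict(dict)
            for records in services:
                for record in records:
                    key = record["dataset"]
                    for field, value in record.items():
                        if field != "dataset":
                            view[key].setdefault(field, []).append(value)
            return dict(view)

        if __name__ == "__main__":
            for dataset, info in aggregate(bookkeeping, locations).items():
                print(dataset, info)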

  2. The CMS data management system

    International Nuclear Information System (INIS)

    Giffels, M; Magini, N; Guo, Y; Kuznetsov, V; Wildish, T

    2014-01-01

    The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the data transfer and location system; the Data Bookkeeping Service (DBS), a metadata catalog; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data have been moved since the run began. The modular system has allowed the optimal use of appropriate underlying technologies. In this contribution we will discuss the use of both Oracle and NoSQL databases to implement the data management elements as well as the individual architectures chosen. We will discuss how the data management system functioned during the first run, and what improvements are planned in preparation for 2015.

  3. CMS results on multijet correlations

    CERN Document Server

    INSPIRE-00008500

    2015-04-10

    We present recent measurements of multijet correlations using forward and low-$p_{\\mathrm{T}}$ jets performed by the CMS collaboration at the LHC collider. In pp collisions at $\\sqrt{s} = 7$ TeV, azimuthal correlations in dijets separated in rapidity by up to 9.4 units were measured. The results are compared to BFKL- and DGLAP-based Monte Carlo generator and analytic predictions. In pp collisions at $\\sqrt{s} = 8$ TeV, cross sections for jets with $p_{\\mathrm{T}}$ > 21 GeV and |y| 1 GeV (minijets) are presented. The minijet results are sensitive to the bound imposed by the total inelastic cross section, and are compared to various models for taming the growth of the $2 \\rightarrow 2$ cross section at low $p_{\\mathrm{T}}$.

  4. Top Quark Physics with CMS

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Higgs mechanism. There are various hints at deviations from the Standard Model expectation which have been observed recently by Tevatron experiments in top final states. Several signatures of new physics accessible at the LHC either suffer from top-quark production as a significant background or contain top quarks themselves. In this talk, we present results on top quark physics obtained from the first LHC data collected by the CMS experiment.They include measurements of the top pair production cross section in various channels and their combination, measurements of the top quark mass, the single top cross section, a search for new particles decaying into top pairs, and a first look at the charge asymmetry.

  5. CMS Planning and Scheduling System

    CERN Document Server

    Kotamaki, M

    1998-01-01

    The paper describes the procedures and the system to build and maintain the schedules needed to manage time, resources, and progress of the CMS project. The system is based on the decomposition of the project into work packages, which can be each considered as a complete project with its own structure. The system promotes the distribution of the decision making and responsibilities to lower levels in the organisation by providing a state-of-the-art system to formalise the external commitments of the work packages without limiting their ability to modify their internal schedules to best meet their commitments. The system lets the project management focus on the interfaces between the work packages and alerts the management immediately if a conflict arises. The proposed system simplifies the planning and management process and eliminates the need for a large, centralised project management system.

  6. The CMS silicon strip tracker

    International Nuclear Information System (INIS)

    Focardi, E.; Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Bartalini, P.; Basti, A.; Biggeri, U.; Bilei, G.M.; Bisello, D.; Boemi, D.; Bosi, F.; Borrello, L.; Bozzi, C.; Braibant, S.; Breuker, H.; Bruzzi, M.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B.; Ciampolini, P.; Civinini, C.; Creanza, D.; D'Alessandro, R.; Da Rold, M.; Demaria, N.; De Palma, M.; Dell'Orso, R.; Marina, R. Della; Dutta, S.; Eklund, C.; Elliott-Peisert, A.; Feld, L.; Fiore, L.; French, M.; Freudenreich, K.; Fuertjes, A.; Giassi, A.; Giraldo, A.; Glessing, B.; Gu, W.H.; Hall, G.; Hammerstrom, R.; Hebbeker, T.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Luebelsmeyer, K.; Lustermann, W.; Maettig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Evoy, B. Mc; Meschini, M.; Messineo, A.; My, S.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Parrini, G.; Passeri, D.; Pieri, M.; Piperov, S.; Potenza, R.; Raffaelli, F.; Raso, G.; Raymond, M.; Santocchia, A.; Schmitt, B.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Skog, K.; Starodumov, A.; Stavitski, I.; Stefanini, G.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuuva, T.; Vannini, C.; Verdini, P.G.; Viertel, G.; Xie, Z.; Wang, Y.; Watts, S.; Wittmer, B.

    1999-01-01

    The Silicon Strip Tracker (SST) is the intermediate part of the CMS Central Tracker System. SST is based on microstrip silicon devices and in combination with pixel detectors and the Microstrip Gas Chambers aims at performing pattern recognition, track reconstruction and momentum measurements for all tracks with p_T ≥ 2 GeV/c originating from high luminosity interactions at √s = 14 TeV at LHC. We aim at exploiting the advantages and the physics potential of the precise tracking performance provided by the microstrip silicon detectors on a large scale apparatus and in a much more difficult environment than ever. In this paper we describe the actual SST layout and the readout system. (author)

  7. The CMS Silicon Tracker Alignment

    CERN Document Server

    Castello, R

    2008-01-01

    The alignment of the Strip and Pixel Tracker of the Compact Muon Solenoid experiment, with its large number of independent silicon sensors and its excellent spatial resolution, is a complex and challenging task. Besides high precision mounting, survey measurements and the Laser Alignment System, track-based alignment is needed to reach the envisaged precision. Three different algorithms for track-based alignment were successfully tested on a sample of cosmic-ray data collected at the Tracker Integration Facility, where 15% of the Tracker was tested. These results, together with those coming from the CMS global run, will provide the basis for the full-scale alignment of the Tracker, which will be carried out with the first p-p collisions.

  8. LHCC COMPREHENSIVE REVIEW OF CMS (JULY 07)

    CERN Multimedia

    Extract from the Draft Report 1. EXECUTIVE SUMMARY The CMS Collaboration has made significant progress towards producing a detector ready for LHC operation in 2008. The past year saw all sub-detector groups successfully produce high-quality components and modules, and integrate them into the final objects to be installed into the CMS magnet. Installation and commissioning of final components in the CMS UXC55 cavern are well under way. In particular, the heavy lowering of detector elements into the CMS experiment cavern is a major success. The new CMS master schedule V36 incorporates the revised LHC machine schedule and includes an optimized detector sequencing. In spite of various delays, it remains possible that CMS will have an initial detector ready to exploit the initial LHC run in spring 2008. Installation of the Electromagnetic Calorimeter End-Cap (EE) and Pre-shower (ES) detectors is scheduled to be completed no sooner than July 2008 and CMS now plans to install the complete Pixel Detector for ...

  9. The CMS Masterclass and Particle Physics Outreach

    Energy Technology Data Exchange (ETDEWEB)

    Cecire, Kenneth [Notre Dame U.; Bardeen, Marjorie [Fermilab; McCauley, Thomas [Notre Dame U.

    2014-01-01

    The CMS Masterclass enables high school students to analyse authentic CMS data. Students can draw conclusions on key ratios and particle masses by combining their analyses. In particular, they can use the ratio of W^+ to W^- candidates to probe the structure of the proton, they can find the mass of the Z boson, and they can identify additional particles including, tentatively, the Higgs boson. In the United States, masterclasses are part of QuarkNet, a long-term program that enables students and teachers to use cosmic ray and particle physics data for learning with an emphasis on data from CMS.
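
    The two headline measurements mentioned above are simple enough to sketch: the W⁺/W⁻ ratio is a counting exercise, and the Z mass comes from the invariant mass of the two muons, m = sqrt((E1+E2)² − |p1+p2|²). The toy event list and muon four-vectors below are invented for illustration, not real CMS data.

        # Sketch of two Masterclass-style measurements: a W+/W- candidate ratio
        # and a dimuon invariant mass. The toy events below are invented.
        import math

        w_candidates = ["W+", "W-", "W+", "W+", "W-", "W+"]   # toy classification
        ratio = w_candidates.count("W+") / w_candidates.count("W-")
        print(f"W+/W- ratio: {ratio:.2f}")

        def invariant_mass(mu1, mu2):
            """mu = (E, px, py, pz) in GeV; returns sqrt((E1+E2)^2 - |p1+p2|^2)."""
            E = mu1[0] + mu2[0]
            px, py, pz = (mu1[i] + mu2[i] for i in range(1, 4))
            return math.sqrt(max(E * E - (px * px + py * py + pz * pz), 0.0))

        # Two toy muons roughly back-to-back in the transverse plane.
        mu_plus  = (45.6,  44.9,  7.1,  2.0)
        mu_minus = (45.8, -44.7, -8.3, -4.1)
        print(f"dimuon mass: {invariant_mass(mu_plus, mu_minus):.1f} GeV")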

  10. CP violation in CMS expected performance

    CERN Document Server

    Stefanescu, J

    1999-01-01

    The CMS experiment can contribute significantly to the measurement of the CP violation asymmetries. A recent evaluation of the expected precision on the CP violation parameter sin 2β in the channel B_d^0 → J/ψ K_S^0 has been performed using a simulation of the CMS tracker including full pattern recognition. CMS has also studied the possibility to observe CP violation in the decay channel B_s^0 → J/ψ φ. The results of these studies are reviewed. (7 refs).

  11. SUSY searches in early CMS data

    International Nuclear Information System (INIS)

    Tricomi, A

    2008-01-01

    In the first year of data taking at the LHC, the CMS experiment expects to collect about 1 fb⁻¹ of data, which makes possible the first searches for new phenomena. All such searches, however, require the measurement of the SM background and a detailed understanding of the detector performance, reconstruction algorithms and triggering. The CMS efforts are hence directed at designing a realistic analysis plan in preparation for data taking. In this paper, the CMS perspectives and analysis strategies for Supersymmetry (SUSY) discovery with early data are presented.

  12. 42 CFR 489.53 - Termination by CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Termination by CMS. 489.53 Section 489.53 Public... Reinstatement After Termination § 489.53 Termination by CMS. (a) Basis for termination of agreement with any provider. CMS may terminate the agreement with any provider if CMS finds that any of the following failings...

  13. 42 CFR 426.517 - CMS' statement regarding new evidence.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false CMS' statement regarding new evidence. 426.517... DETERMINATIONS Review of an NCD § 426.517 CMS' statement regarding new evidence. (a) CMS may review any new... experts; and (5) Presented during any hearing. (b) CMS may submit a statement regarding whether the new...

  14. 42 CFR 460.20 - Notice of CMS determination.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Notice of CMS determination. 460.20 Section 460.20... ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.20 Notice of CMS determination. (a... application to CMS, CMS takes one of the following actions: (1) Approves the application. (2) Denies the...

  15. A summary of the CMS Create event

    CERN Multimedia

    CERN. Geneva; GASTAL, Martin

    2016-01-01

    The maiden CMS Create event took place in November 2015 and was a huge success. The output from all the participants was fantastic. As organisers we learnt a lot and hope to build on our experience for the 2016 event!

  16. CMS in historic accord with China

    CERN Multimedia

    Patrice Loiez

    1999-01-01

    Following signature of the CMS Memorandum of Understanding, Research Director of Collider Programmes Roger Cashmore (left) shakes hands with Professor WANG Naiyan, Vice-President of the National Natural Science Foundation of China.

  17. Award for the best CMS thesis

    CERN Multimedia

    2003-01-01

    The 2002 CMS PhD Thesis Award has been presented to Giacomo Luca Bruno for his thesis defended at the University of Pavia in Italy and entitled "The RPC detectors and the muon system for the CMS experiment at the LHC". His work was supervised by Sergio P. Ratti from the University of Pavia. Since April 2002, Giacomo has been employed as a research fellow by CERN's EP Division. He continues to work on CMS in the areas of data acquisition and physics reconstruction and selection. Last Monday he received a commemorative engraved plaque from Lorenzo Foà, chairman of the CMS Collaboration Board. He will also receive expenses paid to an international physics conference to present his thesis results. Giacomo Luca Bruno with Lorenzo Foà

  18. The CMS "Higgs Boson Goose Game" Poster

    CERN Multimedia

    Davis, Siona Ruth

    Building and operating the CMS Detector is a complicated endeavour! Now, more than 20 years after the detector was conceived, the CMS Bologna group proposes to follow the steps of this challenging project by playing "The Higgs Boson Goose Game", illustrating CMS activities and goals. The concept of the game is inspired by the traditional "Game of the Goose". The underlying idea is that the progress of building and operating a detector at the LHC is similar to the progress of the pawns on the game board: it is fast at times, bringing rewards and satisfaction, while sometimes unexpected problems cause delays or even a step back requiring CMS scientists to use all of their skill and creativity to devise new solutions.

  19. The CMS Higgs Boson Goose Game

    CERN Document Server

    Cavallo, Francesca Romana

    2015-01-01

    Building and operating the CMS Detector is a complicated endeavour! Now, more than 20 years after the detector was conceived, the CMS Bologna group proposes to follow the steps of this challenging project by playing The Higgs Boson Goose Game, illustrating CMS activities and goals. The concept of the game is inspired by the traditional Game of the Goose. The underlying idea is that the progress of building and operating a detector at the LHC is similar to the progress of the pawns on the game board: it is fast at times, bringing rewards and satisfaction, while sometimes unexpected problems cause delays or even a step back, requiring CMS scientists to use all of their skill and creativity to devise new solutions.

  20. The CMS High Level Trigger System

    CERN Document Server

    Afaq, A; Bauer, G; Biery, K; Boyer, V; Branson, J; Brett, A; Cano, E; Carboni, A; Cheung, H; Ciganek, M; Cittolin, S; Dagenhart, W; Erhan, S; Gigi, D; Glege, F; Gómez-Reino, Robert; Gulmini, M; Gutiérrez-Mlot, E; Gutleber, J; Jacobs, C; Kim, J C; Klute, M; Kowalkowski, J; Lipeles, E; Lopez-Perez, Juan Antonio; Maron, G; Meijers, F; Meschi, E; Moser, R; Murray, S; Oh, A; Orsini, L; Paus, C; Petrucci, A; Pieri, M; Pollet, L; Rácz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Sexton-Kennedy, E; Sumorok, K; Suzuki, I; Tsirigkas, D; Varela, J

    2007-01-01

    The CMS Data Acquisition (DAQ) System relies on a purely software driven High Level Trigger (HLT) to reduce the full Level-1 accept rate of 100 kHz to approximately 100 Hz for archiving and later offline analysis. The HLT operates on the full information of events assembled by an event builder collecting detector data from the CMS front-end systems. The HLT software consists of a sequence of reconstruction and filtering modules executed on a farm of O(1000) CPUs built from commodity hardware. This paper presents the architecture of the CMS HLT, which integrates the CMS reconstruction framework in the online environment. The mechanisms to configure, control, and monitor the Filter Farm and the procedures to validate the filtering code within the DAQ environment are described.
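
    The HLT's structure, a sequence of reconstruction and filter modules that each either pass an event on to the next step or reject it, can be caricatured in a few lines. The filters, thresholds and toy rates below are invented placeholders, not the actual CMS trigger menu or framework.

        # Caricature of a high-level-trigger path: each module either passes an
        # event to the next step or rejects it. Filters and rates are invented.
        import random

        def l1_seed(event):        return event["l1_accept"]
        def reco_muon(event):      return event.setdefault("muon_pt", random.expovariate(1 / 20.0)) > 0
        def filter_muon_pt(event): return event["muon_pt"] > 40.0   # placeholder threshold in GeV

        HLT_PATH = [l1_seed, reco_muon, filter_muon_pt]

        def run_path(event):
            return all(step(event) for step in HLT_PATH)   # stop at the first rejection

        if __name__ == "__main__":
            random.seed(1)
            events = [{"l1_accept": True} for _ in range(100000)]
            accepted = sum(run_path(e) for e in events)
            print(f"accepted {accepted} of {len(events)} Level-1 events")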

  1. Particle Flow at CMS and the ILC

    CERN Document Server

    Ballin, J A C

    2010-01-01

    This thesis describes hadron reconstruction at the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. The focus is on the particle flow reconstruction of these objects. This thesis revisits the subject of the CMS calorimeters' non-linear response to hadrons. Data from testbeam experiments conducted in 2006 & 2007 is compared with simulations and substantial differences are found. A particle flow calibration to correct the energy response of the testbeam data is evaluated. The reconstructed jet response is found to change by ~ 5% when a data-driven calibration is used in place of the calibration derived from simulation. Collision data taken at the early stage of CMS' commissioning is also presented. The hadron response in data is determined to be compatible with testbeam results presented in this thesis. This thesis also details the use of neural networks to improve the energy measurement of hadrons at CMS. The networks are implemented in a functional and concurrent ...

  2. CMS Innovation Center - Data and Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Innovation Center maintains an expanding portfolio supporting the development and testing of innovative health care payment and service delivery models. As...

  3. The CMS Data Analysis School Experience

    Energy Technology Data Exchange (ETDEWEB)

    De Filippis, N. [INFN, Bari; Bauerdick, L. [Fermilab; Chen, J. [Taiwan, Natl. Taiwan U.; Gallo, E. [DESY; Klima, B. [Fermilab; Malik, S. [Puerto Rico U., Mayaguez; Mulders, M. [CERN; Palla, F. [INFN, Pisa; Rolandi, G. [Pisa, Scuola Normale Superiore

    2017-11-21

    The CMS Data Analysis School is an official event organized by the CMS Collaboration to teach students and post-docs how to perform a physics analysis. The school is coordinated by the CMS schools committee and was first implemented at the LHC Physics Center at Fermilab in 2010. As part of the training, there are a number of “short” exercises on physics object reconstruction and identification, Monte Carlo simulation, and statistical analysis, which are followed by “long” exercises based on physics analyses. Some of the long exercises go beyond the current state of the art of the corresponding CMS analyses. This paper describes the goals of the school, the preparations for a school, the structure of the training, and student satisfaction with the experience as measured by surveys.

  4. HB+ inserted into the CMS Solenoid

    CERN Multimedia

    Tejinder S. Virdee, CERN

    2006-01-01

    The first half of the barrel hadron calorimeter (HB+) has been inserted into the superconducting solenoid of CMS, in preparation for the magnet test and cosmic challenge. The operation went smoothly, lasting a couple of days.

  5. CMS Virtual Visit - Researchers Night in Portugal

    CERN Multimedia

    Abreu, Pedro

    2016-01-01

    Researchers Night at Planetarium Calouste Gulbenkian - Ciência Viva Centre in Lisbon. Organised by researchers from LIP (Laboratório de Instrumentação e Física Experimental de Partículas) and including CMS Virtual Visit during which André David Tinoco Mendes and José Rasteiro da Silva, based at Cessy, France, "virtually" discussed science and technology behind the CMS detector with the audience in Lisbon.

  6. File Level Provenance Tracking in CMS

    CERN Document Server

    Jones, C D; Paterno, M; Sexton-Kennedy, L; Tanenbaum, W; Riley, D S

    2009-01-01

    The CMS off-line framework stores provenance information within CMS's standard ROOT event data files. The provenance information is used to track how each data product was constructed, including what other data products were read to do the construction. We will present how the framework gathers the provenance information, the efforts necessary to minimise the space used to store the provenance in the file and the tools that will be available to use the provenance.
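
    A hypothetical sketch of the idea (these are not the CMS framework classes): every product put into the event carries a record of the module that made it and of the products it read, so the full ancestry of any product can be reconstructed afterwards.

      from dataclasses import dataclass, field

      @dataclass
      class Provenance:
          producer: str                                  # module that made the product
          parents: list = field(default_factory=list)    # products it read

      class Event:
          def __init__(self):
              self.products = {}      # label -> value
              self.provenance = {}    # label -> Provenance

          def put(self, label, value, producer, parents=()):
              self.products[label] = value
              self.provenance[label] = Provenance(producer, list(parents))

          def ancestry(self, label):
              # walk the provenance graph back to the original inputs
              prov = self.provenance[label]
              chain = {label: prov.producer}
              for parent in prov.parents:
                  chain.update(self.ancestry(parent))
              return chain

      event = Event()
      event.put("rawHits", [1, 2, 3], producer="unpacker")
      event.put("tracks", ["trk0"], producer="trackFinder", parents=["rawHits"])
      print(event.ancestry("tracks"))   # {'tracks': 'trackFinder', 'rawHits': 'unpacker'}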

  7. NEW EDITOR OF THE CMS BULLETIN

    CERN Multimedia

    Walter Van Doninck has been the Editor of the CMS Bulletin since 2000. The Bulletin not only helps disseminate information but also records the progress of CMS. Walter is handing over to Karl Gill. We would like to thank Walter for carrying out this task with enthusiasm and efficiency for so long. We should also thank Karl for accepting to take over and wish him well over the coming exciting period.

  8. The Trigger System of the CMS Experiment

    OpenAIRE

    Felcini, Marta

    2008-01-01

    We give an overview of the main features of the CMS trigger and data acquisition (DAQ) system. Then, we illustrate the strategies and trigger configurations (trigger tables) developed for the detector calibration and physics program of the CMS experiment, at start-up of LHC operations, as well as their possible evolution with increasing luminosity. Finally, we discuss the expected CPU time performance of the trigger algorithms and the CPU requirements for the event filter farm at start-up.

  9. Xenon-Xenon collision events in CMS

    CERN Multimedia

    Mc Cauley, Thomas

    2017-01-01

    One of the first-ever xenon-xenon collision events recorded by CMS during the LHC’s one-day-only heavy-ion run with xenon nuclei. The large number of tracks emerging from the centre of the detector show the many simultaneous nucleon-nucleon interactions that take place when two xenon nuclei, each with 54 protons and 75 neutrons, collide inside CMS.

  10. Giant CMS magnet goes underground at CERN

    CERN Multimedia

    2007-01-01

    "Scientists of the US CMS collaboration joined colleagues around the world in announcing today (February 28) that the heaviest piece of the Compact Muon Solenoid particle detector has begun the momentous journey into its experimental cavern 100 meters underground. A huge gantry crne is slowly lowering the CMS detector's preassembled central section into place in the Large Hadron Collider accelerator at CERN in Geneva, Switzerland." (1 page)

  11. Visualization of the CMS python configuration system

    International Nuclear Information System (INIS)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J; Hegner, B; Hinzmann, A

    2010-01-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.
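
    The following toy sketch illustrates the underlying idea only (the module labels are invented and this is not the real CMSSW configuration API): modules are Python objects, their dependencies are part of the configuration, and a simple graph walk yields the edges that a visualization tool would draw.

      class Module:
          """A configuration module with explicit dependencies."""
          def __init__(self, label, depends_on=()):
              self.label = label
              self.depends_on = list(depends_on)

      digis    = Module("siPixelDigis")
      clusters = Module("siPixelClusters", depends_on=[digis])
      tracks   = Module("generalTracks",   depends_on=[clusters])

      def edges(modules):
          # yield (upstream, downstream) pairs, i.e. the information flow
          for module in modules:
              for dep in module.depends_on:
                  yield dep.label, module.label

      for src, dst in edges([digis, clusters, tracks]):
          print(f"{src} -> {dst}")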

  12. Visualization of the CMS python configuration system

    Energy Technology Data Exchange (ETDEWEB)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J [RWTH Aachen University, Physikalisches Institut 3A, 52062 Aachen (Germany); Hegner, B [CERN, CH-1211 Geneva 23 (Switzerland); Hinzmann, A, E-mail: andreas.hinzmann@cern.c

    2010-04-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.

  13. The CMS Journey to LHC Physics

    CERN Document Server

    CERN. Geneva

    2007-01-01

    A history of construction, encompassing the R&D and challenges faced over the last decade and a half, will be recalled using selected examples. CMS is currently in the final stages of installation and commissioning is gathering pace. After a short status report of where CMS stands today some of the expected (great) physics to come will be outlined.

  14. First half of CMS inner tracker barrel

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The first half of the CMS inner tracker barrel is seen in this image consisting of three layers of silicon modules which will be placed at the centre of the CMS experiment at the LHC in CERN. Laying close to the interaction point of the 14 TeV proton-proton collisions, the silicon used here must be able to survive high doses of radiation and a 4 T magnetic field without damage.

  15. A new Information Architecture, Website and Services for the CMS Experiment

    Science.gov (United States)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-12-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  16. A new information architecture, website and services for the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab; Rusack, Eleanor [Fermilab; Zemleris, Vidmantas [Vilnius U.

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture, the system design, implementation and monitoring, the document and content database, security aspects, and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  17. A new Information Architecture, Website and Services for the CMS Experiment

    International Nuclear Information System (INIS)

    Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas

    2012-01-01

    The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.

  18. Virtual Data in CMS Analysis

    CERN Document Server

    Arbree, A; Bourilkov, D; Cavanaugh, R J; Graham, G; Rodríguez, J; Wilde, M; Zhao, Y

    2003-01-01

    The use of virtual data for enhancing the collaboration between large groups of scientists is explored in several ways: - by defining ``virtual'' parameter spaces which can be searched and shared in an organized way by a collaboration of scientists in the course of their analysis - by providing a mechanism to log the provenance of results and the ability to trace them back to the various stages in the analysis of real or simulated data - by creating ``check points'' in the course of an analysis to permit collaborators to explore their own analysis branches by refining selections, improving the signal to background ratio, varying the estimation of parameters, etc. - by facilitating the audit of an analysis and the reproduction of its results by a different group, or in a peer review context. We describe a prototype for the analysis of data from the CMS experiment based on the virtual data system Chimera and the object-oriented data analysis framework ROOT. The Chimera system is used to chain together several s...

  19. Virtual data in CMS analysis

    International Nuclear Information System (INIS)

    Arbree, A.

    2003-01-01

    The use of virtual data for enhancing the collaboration between large groups of scientists is explored in several ways: by defining ''virtual'' parameter spaces which can be searched and shared in an organized way by a collaboration of scientists in the course of their analysis; by providing a mechanism to log the provenance of results and the ability to trace them back to the various stages in the analysis of real or simulated data; by creating ''check points'' in the course of an analysis to permit collaborators to explore their own analysis branches by refining selections, improving the signal to background ratio, varying the estimation of parameters, etc.; by facilitating the audit of an analysis and the reproduction of its results by a different group, or in a peer review context. We describe a prototype for the analysis of data from the CMS experiment based on the virtual data system Chimera and the object-oriented data analysis framework ROOT. The Chimera system is used to chain together several steps in the analysis process including the Monte Carlo generation of data, the simulation of detector response, the reconstruction of physics objects and their subsequent analysis, histogramming and visualization using the ROOT framework
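
    A toy illustration of the virtual-data idea in plain Python (this is not the Chimera system itself): a derived dataset is declared as a transformation of its inputs, is only materialized on demand, and keeps its derivation chain so that results can be audited or reproduced.

      class VirtualData:
          def __init__(self, name, transform, inputs=()):
              self.name, self.transform, self.inputs = name, transform, list(inputs)
              self._value = None          # nothing is computed until requested

          def materialize(self):
              if self._value is None:
                  args = [inp.materialize() for inp in self.inputs]
                  self._value = self.transform(*args)
              return self._value

          def derivation(self):
              # the chain of steps that produced this dataset
              chain = [self.name]
              for inp in self.inputs:
                  chain += inp.derivation()
              return chain

      generated = VirtualData("generation", lambda: list(range(10)))
      selected  = VirtualData("selection", lambda xs: [x for x in xs if x % 2 == 0],
                              inputs=[generated])
      print(selected.materialize())   # [0, 2, 4, 6, 8]
      print(selected.derivation())    # ['selection', 'generation']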

  20. Heavy ion results from CMS

    CERN Document Server

    Milosevic, Jovan

    2016-01-01

    Two- and multi-particle angular correlations in pp, pPb and PbPb collisions at the LHC energies are presented as a function of centrality, charged-particle multiplicity and transverse momentum ($p_{T}$). The data were collected using the CMS detector. The Fourier coefficents in PbPb collisions are measured over an extended $p_{T}$ range up to 100 GeV/c. These $v_{n}$ measurements at high-$p_{T}$ are complementary to the $R_{AA}$ measurements. The elliptic flow of charged and strange particles and the triangular flow of charged particles in pp collisions is measured using the two-particle correlations. A clear mass ordering effect is observed for low-$p_{T}$ $v_{2}$ values. For the first time, in 13 TeV pp collisions, the $v_{2}$ is extracted from four- and six-particle correlations, and is comparable to the $v_{2}$ from two-particle correlations. This supports the collective nature of the long-range correlations in high-multiplicity pp collisions. A Principle Component Analysis (PCA) of two-particle correlati...
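
    For context, the coefficients $v_{n}$ quoted above are conventionally defined through the Fourier expansion of the azimuthal particle distribution, $dN/d\varphi \propto 1 + 2\sum_{n} v_{n}\cos[n(\varphi - \Psi_{n})]$ with $v_{n} = \langle\cos[n(\varphi - \Psi_{n})]\rangle$, where $\Psi_{n}$ is the $n$-th order symmetry-plane angle; $v_{2}$ is the elliptic and $v_{3}$ the triangular flow coefficient. This standard definition is added for orientation and is not quoted from the record itself.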

  1. CMS prepares for new challenges

    CERN Multimedia

    Antonella Del Rosso

    2014-01-01

    One of the world’s largest physics experiments has just had a change in leadership. This is a chance for the collaboration to take stock of the tremendous work done for LS1 and to prepare for the challenges that lie ahead.   From left to right: Kerstin Borras, Tiziano Camporesi and Paris Sphicas. “The keyword is teamwork. That’s the only way you can effectively manage a large number of extremely talented and motivated people,” says Tiziano Camporesi who took the reins of the CMS collaboration at the beginning of the year. The recipe might seem easier on paper than in practice. However, given his 28 years at CERN, two of which he spent as the head of the DELPHI collaboration, Camporesi has extensive experience in managing large scientific collaborations and success in this respect is well within his reach: “I have learned many lessons from the past and I believe that building consensus is instrumental to successful leadership.” The C...

  2. CMS kinematic edge from sbottoms

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Peisi; Wagner, Carlos E. M.

    2015-01-01

    We present two scenarios in the Minimal Supersymmetric Extension of the Standard Model (MSSM) that can lead to an explanation of the excess in the invariant mass distribution of two oppositely charged, same-flavor leptons, and the corresponding edge at an energy of about 78 GeV, recently reported by the CMS Collaboration. In both scenarios, sbottoms are pair produced, and decay to neutralinos and a b-jet. The heavier neutralinos further decay to a pair of leptons and the lightest neutralino through on-shell sleptons or off-shell neutral gauge bosons. These scenarios are consistent with the current limits on the sbottoms, neutralinos, and sleptons. Assuming that the lightest neutralino is stable, we discuss the predicted relic density as well as the implications for dark matter direct detection. We show that consistency between the predicted and the measured value of the muon anomalous magnetic moment may be obtained in both scenarios. Finally, we define the signatures of these models that may be tested at the 13 TeV run of the LHC.
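
    For context, the standard expression for the dilepton edge in an on-shell slepton cascade $\tilde{\chi}_{2}^{0} \to \ell\tilde{\ell} \to \ell\ell\tilde{\chi}_{1}^{0}$ is $(m_{\ell\ell}^{\mathrm{max}})^{2} = (m_{\tilde{\chi}_{2}^{0}}^{2} - m_{\tilde{\ell}}^{2})(m_{\tilde{\ell}}^{2} - m_{\tilde{\chi}_{1}^{0}}^{2})/m_{\tilde{\ell}}^{2}$, so the reported edge position near 78 GeV constrains this combination of neutralino and slepton masses. The formula is a standard kinematic result and is not quoted from the record itself.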

  3. Guido Tonelli elected next CMS spokesperson

    CERN Multimedia

    2009-01-01

    Guido Tonelli has been elected as the next CMS spokesperson. He will take over from Jim Virdee on January 1, 2010, and will head the collaboration through the first crucial year of data-taking. Guido Tonelli, CMS spokesperson-elect, into the CMS cavern. "It will be very tough and there will be enormous pressure," explains Guido Tonelli, CMS spokesperson-elect. "It will be the first time that CMS will run for a whole year so it is important to go through the checklist to be able to take good quality data." Tonelli, who is currently CMS Deputy spokesperson, will take over from Jim Virdee on January 1, 2010 – only a few months into CMS’s first full year of data-taking. "The collisions will probably be different to our expectations. So it’s going to take the effort of the entire collaboration worldwide to be ready for this new phase." Born in Italy, Tonelli originally studied at the University of Pisa, where he is now a Professo...

  4. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining of all this information has rarely been done, but is of crucial importance for a better understanding of how CMS carried out successful operations, and to reach an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
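
    A plain-Python illustration of the map/reduce pattern referred to above (the record layout is invented; this is not the actual CMS Hadoop application): mappers emit key-value pairs and a reducer aggregates them, here counting dataset replicas per site.

      from collections import Counter
      from functools import reduce

      replica_records = [
          {"dataset": "/A/RAW", "site": "T2_CH_CERN"},
          {"dataset": "/B/AOD", "site": "T2_US_MIT"},
          {"dataset": "/C/AOD", "site": "T2_CH_CERN"},
      ]

      def mapper(record):
          # emit one (key, count) pair per replica
          return (record["site"], 1)

      def reducer(acc, pair):
          site, count = pair
          acc[site] += count
          return acc

      replicas_per_site = reduce(reducer, map(mapper, replica_records), Counter())
      print(replicas_per_site)   # Counter({'T2_CH_CERN': 2, 'T2_US_MIT': 1})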

  5. The CMS Magnetic Field Map Performance

    CERN Document Server

    Klyukhin, V.I.; Andreev, V.; Ball, A.; Cure, B.; Herve, A.; Gaddi, A.; Gerwig, H.; Karimaki, V.; Loveless, R.; Mulders, M.; Popescu, S.; Sarycheva, L.I.; Virdee, T.

    2010-04-05

    The Compact Muon Solenoid (CMS) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with 6 m diameter by 12.5 m long free bore, enclosed inside a 10000-ton return yoke made of construction steel. Accurate characterization of the magnetic field everywhere in the CMS detector is required. During two major tests of the CMS magnet the magnetic flux density was measured inside the coil in a cylinder of 3.448 m diameter and 7 m length with a specially designed field-mapping pneumatic machine as well as in 140 discrete regions of the CMS yoke with NMR probes, 3-D Hall sensors and flux-loops. A TOSCA 3-D model of the CMS magnet has been developed to describe the magnetic field everywhere outside the tracking volume measured with the field-mapping machine. A volume based representation of the magnetic field is used to provide the CMS simulation and reconstruction software with the magnetic field ...
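
    A minimal sketch of what a volume-based (gridded) field representation can look like in practice; the grid and field values below are invented and are not the CMS field map.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      x = y = z = np.linspace(-3.0, 3.0, 31)              # metres
      X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
      Bz = 4.0 * np.exp(-(X**2 + Y**2) / 20.0)            # toy solenoid-like Bz in tesla
      Bx = np.zeros_like(Bz)
      By = np.zeros_like(Bz)

      # one interpolator per field component keeps the sketch simple
      field = [RegularGridInterpolator((x, y, z), comp) for comp in (Bx, By, Bz)]

      def b_field(point):
          pt = np.atleast_2d(point)                       # shape (1, 3)
          return np.array([float(f(pt)[0]) for f in field])

      print(b_field([0.0, 0.0, 0.0]))                     # ~[0. 0. 4.] at the centre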

  6. Russian institute receives CMS Gold Award

    CERN Multimedia

    Patrice Loïez

    2003-01-01

    The Snezhinsk All-Russian Institute of Scientific Research for Technical Physics (VNIITF) of the Russian Federal Nuclear Centre (RFNC) is one of twelve CMS suppliers to receive awards for outstanding performance this year. The CMS Collaboration took the opportunity of the visit to CERN of the Director of VNIITF and his deputy to present the CMS Gold Award, which the institute has received for its exceptional performance in the assembly of steel plates for the CMS forward hadronic calorimeter. This calorimeter consists of two sets of 18 wedge-shaped modules arranged concentrically around the beam-pipe at each end of the CMS detector. Each module consists of steel absorber plates with quartz fibres inserted into them. The institute developed a special welding technique to assemble the absorber plates, enabling a high-quality detector to be produced at relatively low cost. RFNC-VNIITF Director Professor Georgy Rykovanov (right) is seen here receiving the Gold Award from Felicitas Pauss, Vice-Chairman of the CMS ...

  7. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  8. Quality Assurance Tests of the CMS Endcap RPCs

    CERN Document Server

    Ahmed, Ijaz; Hamid Ansari, M; Irfan Asghar, M; Asghar, Sajjad; Awan, Irfan Ullah; Butt, Jamila; Hoorani, Hafeez R; Hussain, Ishtiaq; Khurshid, Taimoor; Muhammad, Saleh; Shahzad, Hassan; Aftab, Zia; Iftikhar, Mian; Khan, Mohammad Khalid; Saleh, M

    2008-01-01

    In this note, we have described the quality assurance tests performed for endcap Resistive Plate Chambers (RPCs) at two different sites, Pakistan Atomic Energy Commission (PAEC) and National Centre for Physics (NCP), in Pakistan. This paper describes various quality assurance tests both at the level of gas gaps and the chambers. The data has been obtained at different time windows during the large scale production of CMS RPCs of RE2/2 and RE2/3 type. In the quality assurance tests, we have investigated parameters like dark current, strip occupancy, cluster size and efficiency of RPCs.

  9. Dose-ranging pharmacokinetics of colistin methanesulphonate (CMS) and colistin in rats following single intravenous CMS doses.

    Science.gov (United States)

    Marchand, Sandrine; Lamarche, Isabelle; Gobin, Patrice; Couet, William

    2010-08-01

    The aim of this study was to evaluate the effect of colistin methanesulphonate (CMS) dose on CMS and colistin pharmacokinetics in rats. Three rats per group received an intravenous bolus of CMS at a dose of 5, 15, 30, 60 or 120 mg/kg. Arterial blood samples were drawn at 0, 5, 15, 30, 60, 90, 120, 150 and 180 min. CMS and colistin plasma concentrations were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The pharmacokinetic parameters of CMS and colistin were calculated by non-compartmental analysis. Linear relationships were observed between CMS and colistin AUCs to infinity and CMS doses, as well as between CMS and colistin C(max) and CMS doses. CMS and colistin pharmacokinetics were linear for a range of colistin concentrations covering the range of values encountered and recommended in patients even during treatment with higher doses.
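
    A minimal non-compartmental sketch of the calculation behind such parameters (the concentrations below are invented and are not the study data): the AUC up to the last sample is obtained with the trapezoidal rule and extrapolated to infinity using the terminal elimination rate constant.

      import numpy as np

      t = np.array([5, 15, 30, 60, 90, 120, 150, 180], dtype=float)   # minutes
      c = np.array([90, 70, 50, 28, 16, 9, 5, 3], dtype=float)        # mg/L

      auc_0_t = np.trapz(c, t)                      # trapezoidal AUC(0 .. t_last)

      # terminal slope lambda_z from a log-linear fit of the last three points
      lam_z = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
      auc_inf = auc_0_t + c[-1] / lam_z             # extrapolation to infinity

      print(f"AUC(0-inf) = {auc_inf:.0f} mg*min/L, lambda_z = {lam_z:.4f} 1/min")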

  10. CMS Young Researchers Award 2013 and Fundamental Physics Scholars Award from the CMS Experiment

    CERN Multimedia

    Lapka, Marzena

    2014-01-01

    Photo 2: CMS Fundamental Physics Scholars (FPSs) 1st prize: Joosep Pata, from Estonian National Institue of Chemical Physics and Biophysics / Photo 1 and 3: CMS Young Researchers Award. From left to right: Guido Tonelli, Colin Bernet, Andre David, Oliver Gutsche, Dmytro Kovalskyi, Andrea Petrucci, Joe Incandela and Jim Virdee

  11. Detector Alignment Studies for the CMS Experiment

    CERN Document Server

    Lampén, Tapio

    2007-01-01

    This thesis presents studies related to track-based alignment for the future CMS experiment at CERN. Excellent geometric alignment is crucial to fully benefit from the outstanding resolution of individual sensors. The large number of sensors makes it difficult in CMS to utilize computationally demanding alignment algorithms. A computationally light alignment algorithm, called the Hits and Impact Points algorithm (HIP), is developed and studied. It is based on minimization of the hit residuals. It can be applied to individual sensors or to composite objects. All six alignment parameters (three translations and three rotations), or a subgroup of them, can be considered. The algorithm is expected to be particularly suitable for the alignment of the innermost part of CMS, the pixel detector, during its early operation, but can be easily utilized to align other parts of CMS also. The HIP algorithm is applied to simulated CMS data and real data measured with a test-beam setup. The simulation studies dem...
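
    A toy one-parameter illustration of track-based alignment by residual minimization (the general idea only, not the HIP implementation): the alignment constant of a module is updated iteratively so that the mean residual between measured and track-predicted hit positions vanishes.

      import numpy as np

      rng = np.random.default_rng(0)
      true_shift = 0.125                                   # unknown displacement (mm)
      predicted = rng.uniform(0.0, 10.0, 500)              # track-predicted positions
      measured = predicted + true_shift + rng.normal(0.0, 0.03, 500)

      alignment = 0.0
      for iteration in range(3):
          residuals = measured - (predicted + alignment)   # hit residuals
          alignment += residuals.mean()                    # least-squares update
          print(f"iteration {iteration}: alignment = {alignment:.4f} mm")
      # converges to the true 0.125 mm shift within the hit resolution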

  12. CMS outreach event to close LS1

    CERN Multimedia

    Achintya Rao

    2015-01-01

    CMS opened its doors to about 700 students from schools near CERN, who visited the detector on 16 and 17 February during the last major CMS outreach event of LS1. Enthusiastic CMS guides spent a day and a half showing the equally enthusiastic visitors, aged 10 to 18, the beauty of CMS and particle physics. The recently installed wheelchair lift was called into action and enabled a visitor who arrived on crutches to access the detector cavern unimpeded.  The CMS collaboration had previously devoted a day to school visits after the successful “Neighbourhood Days” in May 2014 and, encouraged by the turnout, decided to extend an invitation to local schools once again. The complement of nearly 40 guides and crowd marshals was aided by a support team that coordinated the transportation of the young guests and received them at Point 5, where a dedicated safety team including first-aiders, security...

  13. The CMS Beam Halo Monitor electronics

    International Nuclear Information System (INIS)

    Tosi, N.; Fabbri, F.; Montanari, A.; Torromeo, G.; Dabrowski, A.E.; Orfanelli, S.; Grassi, T.; Hughes, E.; Mans, J.; Rusack, R.; Stifter, K.; Stickland, D.P.

    2016-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern in LHC Long Shutdown 1 for measuring the machine induced background for LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes (PMTs). The readout electronics chain uses many components developed for the Phase 1 upgrade to the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge integrating ASIC (QIE10), providing both the signal rise time, with few nanosecond resolution, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event based data. The data is processed in real time and published to CMS and the LHC, providing online feedback on the beam quality. A dedicated calibration monitoring system has been designed to generate short triggered pulses of light to monitor the efficiency of the system. The electronics has been in operation since the first LHC beams of Run II and has served as the first demonstration of the new QIE10, Microsemi Igloo2 FPGA and high-speed 5 Gbps link with LHC data

  14. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 Million lines of code developed by over 250 authors with a new version being released every week. CMS has setup a validation process for quality assurance which enables the developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensure stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release validation process for projects with a large code basis and significant number of developers and can function as a model for future projects.
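
    A hypothetical sketch of the kind of comparison such a validation performs (not the actual CMS tooling): a binned distribution from the candidate release is compared bin by bin to the reference release and flagged if the agreement is poor.

      import numpy as np

      reference = np.array([120, 340, 560, 410, 180, 60], dtype=float)
      candidate = np.array([118, 355, 545, 420, 175, 65], dtype=float)

      def chi2_per_dof(ref, cand):
          err2 = ref + cand                 # Poisson-like errors added in quadrature
          mask = err2 > 0
          chi2 = np.sum((ref[mask] - cand[mask]) ** 2 / err2[mask])
          return chi2 / mask.sum()

      score = chi2_per_dof(reference, candidate)
      print(f"chi2/ndf = {score:.2f} ->", "OK" if score < 2.0 else "needs inspection")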

  15. The CMS Beam Halo Monitor Electronics

    CERN Document Server

    AUTHOR|(CDS)2080684; Fabbri, F.; Grassi, T.; Hughes, E.; Mans, J.; Montanari, A.; Orfanelli, S.; Rusack, R.; Torromeo, G.; Stickland, D.P.; Stifter, K.

    2016-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern in LHC Long Shutdown 1 for measuring the machine induced background for LHC Run II. The system is based on 40 detector units composed of synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes. The readout electronics chain uses many components developed for the Phase 1 upgrade to the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge integrating ASIC (QIE10), providing both the signal rise time, with a few-ns resolution, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event based data. The data is processed in real time and published to CMS and the LHC, providi...

  16. Faces of CMS: Photomosaic (September 2013, low-resolution)

    CERN Multimedia

    Antonelli, Jamie

    2013-01-01

    The "Faces of CMS" photomosaic project aims to show the human element of the CMS Experiment. Most of the images for public outreach show the experimental equipment of CMS or physics results and collision displays. With a collaboration of around 3,000 people scattered around the globe, it's difficult to present the members of CMS in any one image. We asked any interested CMS members to sign up for the project, and allow us to use their photographs. The resulting photo mosaic contains the faces of 1,271 CMS members.

  17. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    AUTHOR|(CDS)2067159

    2016-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons, starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles produced at secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...
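
    A schematic, runnable illustration of the iterative-step structure described above (placeholder pattern recognition only, nothing from the CMS tracking code): each step seeds only from hits that earlier steps have not yet assigned to a track.

      def toy_step(name, min_hits):
          def run(unused_hits):
              # placeholder "pattern recognition": group hits by their label
              groups = {}
              for hit in sorted(unused_hits):
                  groups.setdefault(hit[0], []).append(hit)
              # keep candidates passing this step's quality requirement
              return [{"step": name, "hits": g} for g in groups.values() if len(g) >= min_hits]
          return run

      def run_iterative_tracking(hits, steps):
          unused, tracks = set(hits), []
          for step in steps:
              for track in step(unused):
                  tracks.append(track)
                  unused -= set(track["hits"])   # mask associated hits for later steps
          return tracks

      hits = {("prompt", i) for i in range(4)} | {("displaced", i) for i in range(2)}
      steps = [toy_step("initial", min_hits=4), toy_step("displaced", min_hits=2)]
      for trk in run_iterative_tracking(hits, steps):
          print(trk["step"], len(trk["hits"]), "hits")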

  18. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    Goetzmann, Christophe

    2014-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons, starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles produced at secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  19. Power distribution studies for CMS forward tracker

    International Nuclear Information System (INIS)

    Todri, A.; Turqueti, M.; Rivera, R.; Kwan, S.

    2009-01-01

    The Electronic Systems Engineering Department of the Computing Division at the Fermi National Accelerator Laboratory is carrying out R and D investigations for the upgrade of the power distribution system of the Compact Muon Solenoid (CMS) Pixel Tracker at the Large Hadron Collider (LHC). Among the goals of this effort is that of analyzing the feasibility of alternative powering schemes for the forward tracker, including DC to DC voltage conversion techniques using commercially available and custom switching regulator circuits. Tests of these approaches are performed using the PSI46 pixel readout chip currently in use at the CMS Tracker. Performance measures of the detector electronics will include pixel noise and threshold dispersion results. Issues related to susceptibility to switching noise will be studied and presented. In this paper, we describe the current power distribution network of the CMS Tracker, study the implications of the proposed upgrade with DC-DC converters powering scheme and perform noise susceptibility analysis.

  20. Calorimeter Simulation with Hadrons in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Piperov, Stefan; /Sofiya, Inst. Nucl. Res. /Fermilab

    2008-11-01

    CMS is using Geant4 to simulate the detector setup for the forthcoming data from the LHC. Validation of physics processes inside Geant4 is a major concern in view of getting a proper description of jets and missing energy for signal and background events. This is done by carrying out extensive studies with test beams using prototypes or real detector modules of the CMS calorimeter. These data are matched with Geant4 predictions using the same framework that is used for the entire CMS detector. Tuning of the Geant4 models is carried out and steps to be used in reproducing detector signals are defined in view of measurements of energy response, energy resolution, transverse and longitudinal shower profiles for a variety of hadron beams over a broad energy spectrum between 2 and 300 GeV/c. The tuned Monte Carlo predictions match many of these measurements within systematic uncertainties.
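
    For context, hadron calorimeter energy resolution measurements of this kind are conventionally summarized by the parametrization $\sigma(E)/E = a/\sqrt{E} \oplus b \oplus c/E$, where $a$ is the stochastic term, $b$ the constant term, $c$ the noise term, and $\oplus$ denotes addition in quadrature; this is the standard form, and the parameter values themselves are not given in this record.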

  1. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  2. The CMS all silicon Tracker simulation

    CERN Document Server

    Biasini, Maurizio

    2009-01-01

    The Compact Muon Solenoid (CMS) tracker detector is the world's largest silicon detector with about 201 m$^2$ of silicon strip detectors and 1 m$^2$ of silicon pixel detectors. It contains 66 million pixels and 10 million individual sensing strips. The quality of the physics analysis is highly correlated with the precision of the Tracker detector simulation, which is written on top of GEANT4 and the CMS object-oriented framework. The hit position resolution in the Tracker detector depends on the ability to correctly model the CMS tracker geometry, the signal digitization and Lorentz drift, the calibration and inefficiency. In order to ensure high performance in track and vertex reconstruction, an accurate knowledge of the material budget is therefore necessary since the passive materials, involved in the readout, cooling or power systems, will create unwanted effects during the particle detection, such as multiple scattering, electron bremsstrahlung and photon conversion. In this paper, we present the CM...

  3. Deep learning in jet reconstruction at CMS

    CERN Document Server

    Stoye, Markus

    2017-01-01

    Deep learning has led to several breakthroughs outside the field of high energy physics, yet in jet reconstruction for the CMS experiment at the CERN LHC it has not been used so far. This report shows results of applying deep learning strategies to jet reconstruction at the stage of identifying the original parton association of the jet (jet tagging), which is crucial for physics analyses at the LHC experiments. We introduce a custom deep neural network architecture for jet tagging. We compare the performance of this novel method with the other established approaches at CMS and show that the proposed strategy provides a significant improvement. The strategy provides the first multi-class classifier, instead of the few binary classifiers that previously were used, and thus yields more information and in a more convenient way. The performance results obtained with simulation imply a significant improvement for a large number of important physics analysis at the CMS experiment.
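
    A minimal numpy sketch of the single multi-class (softmax) classifier idea mentioned above, trained by plain gradient descent; the features and labels are invented, so it learns nothing physical and is not the CMS network, it only shows the structure of a multi-class jet-flavour output.

      import numpy as np

      rng = np.random.default_rng(1)
      classes = ["b", "c", "light", "gluon"]
      X = rng.normal(size=(2000, 8))                       # toy jet features
      y = rng.integers(0, len(classes), size=2000)         # toy labels

      W = np.zeros((8, len(classes)))
      b = np.zeros(len(classes))

      def softmax(z):
          z = z - z.max(axis=1, keepdims=True)
          e = np.exp(z)
          return e / e.sum(axis=1, keepdims=True)

      onehot = np.eye(len(classes))[y]
      for _ in range(200):                                 # plain gradient descent
          p = softmax(X @ W + b)
          grad = (p - onehot) / len(X)                     # cross-entropy gradient
          W -= 0.5 * X.T @ grad
          b -= 0.5 * grad.sum(axis=0)

      probs = softmax(X[:1] @ W + b)[0]
      print(dict(zip(classes, probs.round(3))))            # one probability per class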

  4. Interactive Slice of the CMS detector

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    This slice shows a colorful cross-section of the CMS detector with all parts of the detector labelled. Viewers are invited to click on buttons associated with five types of particles to see what happens when each type interacts with the sections of the detector. The five types of particles users can select to send through the slice are muons, electrons, neutral hadrons, charged hadrons and photons. Supplementary information on each type of particles is given. Useful for inclusion into general talks on CMS etc. *Animated CMS "slice" for Powerpoint (Mac & PC) Original version - 2004 Updated version - July 2010 *Six slides required - first is a set of buttons; others are for each particle type (muon, electron, charged/neutral hadron, photon) Recommend putting slide 1 anywhere in your presentation and the rest at the end

  5. Fireworks: A physics event display for CMS

    International Nuclear Information System (INIS)

    Kovalskyi, D.; Tadel, M.; Mrak-Tadel, A.; Bellenot, B.; Kuznetsov, V.; Jones, C.D.; Bauerdick, L.; Case, M.; Mulmenstadt, J.; Yagil, A.

    2010-01-01

    Fireworks is a CMS event display which is specialized for the physics studies case. This specialization allows us to use a stylized rather than 3D-accurate representation when appropriate. Data handling is greatly simplified by using only reconstructed information and ideal geometry. Fireworks provides an easy-to-use interface which allows a physicist to concentrate only on the data in which he is interested. Data is presented via graphical and textual views. Fireworks is built using the Eve subsystem of the CERN ROOT project and CMS's FWLite project. The FWLite project was part of CMS's recent code redesign which separates data classes into libraries separate from algorithms producing the data and uses ROOT directly for C++ object storage, thereby allowing the data classes to be used directly in ROOT.

  6. Fireworks A Physics Event Display for CMS

    CERN Document Server

    Kovalskyi, D; Mrak-Tadel, A; Bellenot, B; Kuznetsov, V; Jones, C D; Bauerdick, L; Case, M; Mülmenstädt, J; Yagil, A

    2010-01-01

    Fireworks is a CMS event display which is specialized for the physics studies case. This specialization allows us to use a stylized rather than 3D-accurate representation when appropriate. Data handling is greatly simplified by using only reconstructed information and ideal geometry. Fireworks provides an easy-to-use interface which allows a physicist to concentrate only on the data in which he is interested. Data is presented via graphical and textual views. Fireworks is built using the Eve subsystem of the CERN ROOT project and CMS's FWLite project. The FWLite project was part of CMS's recent code redesign which separates data classes into libraries separate from algorithms producing the data and uses ROOT directly for C++ object storage, thereby allowing the data classes to be used directly in ROOT.

  7. Pharmacokinetics of Colistin Methansulphonate (CMS) and Colistin after CMS Nebulisation in Baboon Monkeys.

    Science.gov (United States)

    Marchand, Sandrine; Bouchene, Salim; de Monte, Michèle; Guilleminault, Laurent; Montharu, Jérôme; Cabrera, Maria; Grégoire, Nicolas; Gobin, Patrice; Diot, Patrice; Couet, William; Vecellio, Laurent

    2015-10-01

    The objective of this study was to compare two different nebulizers, Eflow rapid® and Pari LC star®, by scintigraphy and PK modeling to simulate epithelial lining fluid concentrations from measured plasma concentrations, after nebulization of CMS in baboons. Three baboons received CMS by IV infusion and by two types of aerosol generators, and colistin by subcutaneous infusion. Gamma imaging was performed after nebulisation to determine colistin distribution in the lungs. Blood samples were collected during 9 h and colistin and CMS plasma concentrations were measured by LC-MS/MS. A population pharmacokinetic analysis was conducted and simulations were performed to predict lung concentrations after nebulization. Higher aerosol deposition in the lungs was observed by scintigraphy when CMS was nebulized with the Pari LC star® than with the Eflow Rapid® nebulizer. This observation was confirmed by the fraction of CMS deposited in the lung (3.5% versus 1.3%, respectively). Simulated CMS and colistin concentrations in epithelial lining fluid were higher with the Pari LC star® than with the Eflow rapid® system. A limited fraction of CMS reaches the lungs after nebulization, but higher colistin plasma concentrations were measured and higher intrapulmonary colistin concentrations were simulated with the Pari LC Star® than with the Eflow Rapid® system.

  8. The CMS CERN Analysis Facility (CAF)

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O [Imperial College (United Kingdom); Bonacorsi, D [Universita and INFN, Bologna (Italy); Fanzago, F [Universita and INFN, Padova (Italy); Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer [Conseil Europeen Recherche Nucl. (CERN) Switzerland (Switzerland); Kreuzer, P [Rheinisch-Westfaelische Tech. Hoch. (RWTH) (Germany); Mankel, R [Deutsches Elektronen-Synchrotron (DESY) (Germany); Metson, S [University of Bristol (United Kingdom); Sanches, J Afonso; Teodoro, D, E-mail: Peter.Kreuzer@cern.c [Universidade do Estado do Rio De Janeiro (UERJ) (Brazil)

    2010-04-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use-cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we will focus on the performance in terms of networking, disk access and job efficiency, and extrapolate prospects for the upcoming first year of LHC data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.

  9. The CMS CERN Analysis Facility (CAF)

    International Nuclear Information System (INIS)

    Buchmueller, O; Bonacorsi, D; Fanzago, F; Gowdy, S; Malgeri, L; Panzer-Steindel, B; Schwickerath, U; Spiga, D; Toebbicke, Rainer; Kreuzer, P; Mankel, R; Metson, S; Sanches, J Afonso; Teodoro, D

    2010-01-01

    The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-by-day analysis. These resources will run on a separate partition in order to protect the high-priority use-cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG GRID, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we will focus on the performance in terms of networking, disk access and job efficiency, and extrapolate prospects for the upcoming first year of LHC data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.

  10. Highlights and Perspectives from the CMS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Joel Nathan [Fermilab

    2017-09-09

    In 2016, the Large Hadron Collider provided proton-proton collisions at 13 TeV center-of-mass energy and achieved very high luminosity and reliability. The performance of the CMS Experiment in this running period and a selection of recent physics results are presented. These include precision measurements and searches for new particles. The status and prospects for data-taking in 2017 and a brief summary of the highlights of the High Luminosity (HL-LHC) upgrade of the CMS detector are also presented.

  11. CMS latest results on Higgs measurements

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Since the discovery of a Higgs boson by the CMS and ATLAS Collaborations in 2012, physicists at the LHC have been making intense efforts to measure this new particle’s properties. Last week, at the 37th International Conference on High Energy Physics, the CMS Collaboration presented a broad set of results from new studies of the Higgs boson. They are based on the full Run 1 data from pp collisions at centre-of-mass energies of 7 and 8 TeV. The analyses use the final calibration and alignment constants and are based on about 25 fb−1 of data. These new results will be summarized here.

  12. Physics with CMS and Electronic Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Rohlf, James W. [Boston Univ., MA (United States)

    2016-08-01

    The current funding is for continued work on the Compact Muon Solenoid (CMS) at the CERN Large Hadron Collider (LHC) as part of the Energy Frontier experimental program. The current budget year covers the first year of physics running at 13 TeV (Run 2). During this period we have concentrated on commissioning of the μTCA electronics, a new standard for distribution of CMS trigger and timing control signals and high-bandwidth data acquisition, as well as participating in Run 2 physics.

  13. Analysis of the CMS visitors feedback Poster

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    CMS welcomed over 5500 visitors underground during the 2013 CERN Open Days and more than 4500 during the Neighbourhood Days of 2014 on the occasion of CERN’s 60th anniversary. During the latter event, visitors gave their feedback on the visit experience by answering three questions: • In one sentence, what will you tell your friends about what you saw today? • What fact or story that you heard today impressed you the most? • Describe the CMS detector in three words. This poster will show the analysis of the answers given by visitors.

  14. Analysis of the CMS visitors feedback Poster

    CERN Multimedia

    Davis, Siona Ruth

    CMS welcomed over 5500 visitors underground during the 2013 CERN Open Days and more than 4500 during the Neighbourhood Days of 2014 on the occasion of CERN’s 60th anniversary. During the latter event, visitors gave their feedback on the visit experience by answering three questions: • In one sentence, what will you tell your friends about what you saw today? • What fact or story that you heard today impressed you the most? • Describe the CMS detector in three words. This poster will show the analysis of the answers given by visitors.

  15. CMS Tracker Alignment Performance Results Summer 2016

    CERN Document Server

    CMS Collaboration

    2016-01-01

    The tracking system of the CMS detector provides excellent resolution for charged particle tracks and an efficient way of tagging jets. In order to reconstruct good quality tracks, the position and orientation of each silicon pixel and strip module need to be determined with a precision of several micrometers. The performance of the CMS tracker alignment in 2016 using cosmic-ray data recorded at 0 T magnetic field and proton-proton collision data recorded at 3.8 T magnetic field has been studied. The data-driven validation of the results is presented. The time-dependent movement of the pixel detector's large-scale structure is demonstrated.

  16. Upgrade of the CMS Event Builder

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current compute nodes and networking infrastructure will have reached the end of their lifetime. We are presenting design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10 Gb/s Ethernet. We report on tests and performance measurements with small-scale test setups.
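
    As a simple consistency check of the two quoted figures, the implied average event size is $100\ \mathrm{GB/s} \,/\, 100\ \mathrm{kHz} = 10^{11}\ \mathrm{B/s} \,/\, 10^{5}\ \mathrm{s^{-1}} = 10^{6}\ \mathrm{B} \approx 1$ MB per built event, which is the scale the network links have to sustain.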

  17. Future prospects of Higgs Physics at CMS

    OpenAIRE

    Marono, Miguel Vidal

    2014-01-01

    The Higgs boson physics reach of the CMS detector with 300(0) fb-1 of proton-proton collisions at sqrt{s} = 14 TeV is presented. Precision measurements of the Higgs boson properties, Higgs boson pair production and self-coupling, rare Higgs boson decays, and the potential for additional Higgs bosons are discussed.

  18. Website development with PyroCMS

    CERN Document Server

    Vineyard, Zachary

    2013-01-01

    A practical and a fast-paced guide that gives you all the information you need to start developing websites with PyroCMS. The book is an excellent resource for developers and makes website development easy and financially viable for everyone.This book is ideal if you are a PHP developer who is looking for a great content management system or a web developer looking to speed up your development times. If you are a web developer, you will need to have some familiarity with OOP and the MVC programming pattern, especially if you want to extend PyroCMS by building add-ons.

  19. VIP visit to CERN P5 CMS of Pakistan Science Members

    CERN Multimedia

    Hoch, Michael

    2012-01-01

    VIP visit to CERN P5 CMS of PAEC & JCPC Science Members List of PAEC Visitors: Dr. Badar Suleman - Member Science PAEC & Member of JCPC Dr. Waqar M. Butt - Member Engineering (Head of HMC3) Dr. Maqsood Ahmad - Chief Scientist (Head of Accelerator Project) List of CMS participants: Prof. Joseph Incandela, CMS Spokesperson Dr. Austin Ball, CMS Technical Coordinator Mr Andrzej Charkiewicz, CMS Resources Manager Dr. Michael Hoch, CMS Outreach activities, CMS photographer and guide Dr. Achille Petrilli, CMS Team Leader

  20. 42 CFR 422.210 - Assurances to CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Assurances to CMS. 422.210 Section 422.210 Public...) MEDICARE PROGRAM MEDICARE ADVANTAGE PROGRAM Relationships With Providers § 422.210 Assurances to CMS. (a) Assurances to CMS. Each organization will provide assurance satisfactory to the Secretary that the...

  1. 42 CFR 411.379 - When CMS accepts a request.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false When CMS accepts a request. 411.379 Section 411.379... Physicians and Entities Furnishing Designated Health Services § 411.379 When CMS accepts a request. (a) Upon receiving a request for an advisory opinion, CMS promptly makes an initial determination of whether the...

  2. 42 CFR 405.1834 - CMS reviewing official procedure.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS reviewing official procedure. 405.1834 Section... Determinations and Appeals § 405.1834 CMS reviewing official procedure. (a) Scope. A provider that is a party to... Administrator by a designated CMS reviewing official who considers whether the decision of the intermediary...

  3. 42 CFR 423.2264 - Guidelines for CMS review.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Guidelines for CMS review. 423.2264 Section 423....2264 Guidelines for CMS review. In reviewing marketing material or enrollment forms under § 423.2262, CMS determines (unless otherwise specified in additional guidance) that the marketing materials— (a...

  4. 42 CFR 403.248 - Administrative review of CMS determinations.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Administrative review of CMS determinations. 403... Certification Program: General Provisions § 403.248 Administrative review of CMS determinations. (a) This section provides for administrative review if CMS determines— (1) Not to certify a policy; or (2) That a...

  5. 42 CFR 433.320 - Procedures for refunds to CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Procedures for refunds to CMS. 433.320 Section 433... Overpayments to Providers § 433.320 Procedures for refunds to CMS. (a) Basic requirements. (1) The agency must refund the Federal share of overpayments that are subject to recovery to CMS through a credit on its...

  6. 42 CFR 438.724 - Notice to CMS.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Notice to CMS. 438.724 Section 438.724 Public...) MEDICAL ASSISTANCE PROGRAMS MANAGED CARE Sanctions § 438.724 Notice to CMS. (a) The State must give the CMS Regional Office written notice whenever it imposes or lifts a sanction for one of the violations...

  7. 42 CFR 460.18 - CMS evaluation of applications.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false CMS evaluation of applications. 460.18 Section 460... ELDERLY (PACE) PACE Organization Application and Waiver Process § 460.18 CMS evaluation of applications. CMS evaluates an application for approval as a PACE organization on the basis of the following...

  8. 42 CFR 411.386 - CMS's advisory opinions as exclusive.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false CMS's advisory opinions as exclusive. 411.386... Relationships Between Physicians and Entities Furnishing Designated Health Services § 411.386 CMS's advisory... described in § 411.370. CMS has not and does not issue a binding advisory opinion on the subject matter in...

  9. 42 CFR 457.1003 - CMS review of waiver requests.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false CMS review of waiver requests. 457.1003 Section 457.1003 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Waivers: General Provisions § 457.1003 CMS review of waiver requests. CMS will review the waiver requests...

  10. 42 CFR 422.2264 - Guidelines for CMS review.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Guidelines for CMS review. 422.2264 Section 422... Guidelines for CMS review. In reviewing marketing material or election forms under § 422.2262 of this part, CMS determines that the marketing materials— (a) Provide, in a format (and, where appropriate, print...

  11. CMS Virtual Visits @ European Researchers Night, 30 September 2016

    CERN Multimedia

    Lapka, Marzena

    2016-01-01

    CMS hosted four virtual visits during European Researchers' Night. Audiences from Greece (NCSR Demokritos, Athens), Poland (University of Science and Technology in Krakow), Italy (Psiquadro in Perugia & INFN in Pisa) and Portugal (Planetarium Calouste Gulbenkian, organised by LIP) had the occasion to converse with CMS researchers and "virtually" visit the CMS Control Room and underground facilities.

  12. Recent results on SUSY searches from CMS

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    The latest results on searches for Supersymmetry from CMS are reviewed. We present searches for direct stop production, searches in final states with four W bosons and multiple b-quarks, and searches for R-Parity violating SUSY. The results use up to 20/fb of data from the 8 TeV LHC run of 2012.

  13. An overview of CMS central hadron calorimeter

    CERN Document Server

    Katta, S

    2002-01-01

    The central hadron calorimeter of the CMS detector is a sampling calorimeter with scintillator plates as the active medium, interleaved with brass absorber plates. It covers the central pseudorapidity region (|eta| < 3.0). The design and construction aspects are reported. The status of construction and assembly of the various subdetectors of HCAL is presented. (5 refs).

  14. CMS : the first barrel ring completed !

    CERN Multimedia

    Laurent Guiraud

    2000-01-01

    On 14 November, the CMS collaboration and the German firm DWE celebrated the successful construction of the detector's first yoke barrel ring. To mark the occasion, those in charge of the construction at CERN and DWE posed for the camera in the middle of the giant component.

  15. Heavy ion measurements at ATLAS and CMS

    CERN Document Server

    Chapon, Emilien

    2018-01-01

    We present an overview of recent results from the ATLAS and CMS collaborations on heavy ion physics. Using data from proton-proton, proton-lead and lead-lead collisions at the LHC, these results help to shed light on the properties of nuclear matter.

  16. The grand descent has begun for CMS

    CERN Multimedia

    2006-01-01

    Until recently, the CMS experimental cavern looked relatively empty; its detector was assembled entirely at ground level, to be lowered underground in 15 sections. On 2 November, the first hadronic forward calorimeter led the way with a grand descent. The first section of the CMS detector (centre of photo) arriving from the vertical shaft, viewed from the cavern floor. There is something unusual about the construction of the CMS detector. Instead of being built in the experimental cavern, like all the other detectors in the LHC experiments, it was constructed at ground level. This was to allow for easy access during the assembly of the detector and to minimise the size of the excavated cavern. The slightly nerve-wracking task of lowering it safely into the cavern in separate sections came after the complete detector was successfully tested with a magnetic field at ground level. In the early morning of 2 November, the first section of the CMS detector began its eagerly awaited descent into the underground ca...

  17. The CMS Beam Halo Monitor Detector System

    CERN Document Server

    CMS Collaboration

    2015-01-01

    A new Beam Halo Monitor (BHM) detector system has been installed in the CMS cavern to measure the machine-induced background (MIB) from the LHC. This background originates from interactions of the LHC beam halo with the final set of collimators before the CMS experiment and from beam gas interactions. The BHM detector uses the directional nature of Cherenkov radiation and event timing to select particles coming from the direction of the beam and to suppress those originating from the interaction point. It consists of 40 quartz rods, placed on each side of the CMS detector, coupled to UV sensitive PMTs. For each bunch crossing the PMT signal is digitized by a charge integrating ASIC and the arrival time of the signal is recorded. The data are processed in real time to yield a precise measurement of per-bunch-crossing background rate. This measurement is made available to CMS and the LHC, to provide real-time feedback on the beam quality and to improve the efficiency of data taking. In this talk we will describ...
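
    The abstract above describes selecting particles by direction and arrival time and turning per-bunch-crossing counts into a background rate. A minimal sketch of that bookkeeping, assuming hypothetical hit records (bunch-crossing number, arrival time in ns, integrated charge) and purely illustrative window and threshold values rather than the real BHM calibration, might look as follows:

        # Toy per-bunch-crossing background-rate bookkeeping for a halo-style monitor.
        # The timing window and charge threshold below are illustrative placeholders.
        from collections import Counter

        N_BUNCHES = 3564          # LHC orbit length in bunch crossings
        ORBIT_FREQ_HZ = 11245.0   # LHC revolution frequency

        def halo_rate_per_bx(hits, t_window=(5.0, 15.0), q_min=20.0, n_orbits=1000):
            """hits: iterable of (bx, arrival_time_ns, charge) tuples.
            Keep hits whose arrival time is compatible with incoming halo and whose
            charge is above threshold, then convert per-BX counts into a rate (Hz)."""
            counts = Counter()
            for bx, t_ns, q in hits:
                if t_window[0] <= t_ns <= t_window[1] and q >= q_min:
                    counts[bx % N_BUNCHES] += 1
            # Rate per bunch crossing = selected hits / number of orbits sampled.
            return {bx: n * ORBIT_FREQ_HZ / n_orbits for bx, n in counts.items()}

        if __name__ == "__main__":
            toy_hits = [(1, 7.2, 35.0), (1, 40.0, 50.0), (2, 9.8, 12.0), (1, 6.1, 28.0)]
            print(halo_rate_per_bx(toy_hits, n_orbits=10))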

  18. Performance of the CMS Beam Halo Monitor

    CERN Document Server

    CMS Collaboration

    2015-01-01

    The CMS Beam Halo Monitor has been successfully installed in the CMS cavern during LHC Long Shutdown 1 for measuring the machine-induced background for LHC Run II. The system is based on 40 detector units composed of radiation-hard synthetic quartz Cherenkov radiators coupled to fast photomultiplier tubes for a direction-sensitive measurement. The readout electronics chain uses many components developed for the Phase 1 upgrade of the CMS Hadronic Calorimeter electronics, with dedicated firmware and readout adapted to the beam monitoring requirements. The PMT signal is digitized by a charge-integrating ASIC (QIE10), providing both the signal rise time, with few-ns resolution, and the charge integrated over one bunch crossing. The backend electronics uses microTCA technology and receives data via a high-speed 5 Gbps asynchronous link. It records histograms with sub-bunch-crossing timing resolution and is read out via IPbus using the newly designed CMS data acquisition for non-event-based data. The data is processed i...

  19. CMS and ATLAS honour their suppliers

    CERN Multimedia

    2001-01-01

    In order to motivate the hundreds of companies building their detectors, the CMS and ATLAS collaborations have recently been handing out awards of excellence to their top suppliers. At its second ceremony of this kind, CMS honoured four of its suppliers, while ATLAS for the first time paid tribute to two of its contractors. The atmosphere in the Council Chamber was festive rather than formal at the start of CMS week on Monday 5 March. Before embarking upon a long series of seminars and presentations, the Collaboration held its second awards ceremony to honour its top suppliers. By paying tribute to the exceptional efforts of certain suppliers, the Collaboration's aim is to motivate all the firms, some 500 in total, taking part in the experiment's construction. The CMS Awards panel thus singles out contractors who have not only provided full satisfaction in terms of compliance with specifications, quality and deadlines, but have in addition provided original solutions to delicate problems. Four firms came away...

  20. CMS Observes Single Top-Quark

    CERN Multimedia

    2011-01-01

    One of the many excellent results harvested by CMS from 2010 data. (Figure shows events vs cosine of the angle between lepton and light jets in the t rest-frame.)

  1. Assembly of the CMS hadronic calorimeter

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The hadronic calorimeter is assembled on the end-cap of the CMS detector in the assembly hall. Hadronic calorimeters measure the energy of particles that interact via the strong force, called hadrons. The detectors are made in a sandwich-like structure where these scintillator tiles are placed between metal sheets.

  2. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...
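
    As an illustration of the kind of request book-keeping a CouchDB-backed service can perform, the sketch below stores a hypothetical Monte Carlo request document through CouchDB's standard HTTP document API (PUT /{db}/{docid}). The field names, database URL and document layout are invented for the example and are not McM's actual schema.

        # Sketch: storing a hypothetical MC-request document in CouchDB via its HTTP API.
        # Field names, URL and schema are illustrative only, not the real McM data model.
        import requests

        COUCHDB_URL = "http://localhost:5984"   # assumed local CouchDB instance
        DB_NAME = "mc_requests"                 # hypothetical database name

        def save_request(prepid, campaign, total_events, priority):
            doc = {
                "prepid": prepid,
                "campaign": campaign,
                "total_events": total_events,
                "priority": priority,
                "status": "new",                # e.g. new -> approved -> submitted -> done
            }
            # CouchDB creates or updates a document with PUT /{db}/{docid}.
            resp = requests.put(f"{COUCHDB_URL}/{DB_NAME}/{prepid}", json=doc)
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            print(save_request("EXO-ExampleCampaign-00001", "ExampleCampaign", 500000, 110000))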

  3. CMS has a heart of pixels

    CERN Multimedia

    2003-01-01

    In the immediate vicinity of the collision point, CMS will be equipped with pixel detectors consisting of no fewer than 50 million pixels measuring 150 microns along each side. Each of the pixels, which receive the signal, is connected to its own electronic circuit by a tiny sphere (seen here in the electron microscope image) measuring 15 to 20 microns in diameter.

  4. CMS: the first barrel ring completed!

    CERN Multimedia

    2000-01-01

    Seven years after design studies began, CERN and the German company DWE have erected the first of the five CMS yoke rings, a giant component weighing 1200 tonnes. The first ring of the CMS magnet yoke, a twelve-sided 15-metre-high colossus, has been erected in the new hall at Point 5 near Cessy. For the last few days it has stood unaided, no longer relying on the central structure required for its assembly. Its construction marks an important milestone in the CMS programme, the culmination of seven years of work at CERN and over two years of manufacturing at DWE. Awarded the contract by the Swiss Federal Institute of Technology (ETH), Zürich, the German manufacturer has produced and assembled the ring components in collaboration with a team from CERN. This feat of mechanical engineering was celebrated two weeks ago at a drink attended by the main protagonists, headed by Franz Kufner, divisional manager at DWE, Franz Leher, production engineer at DWE, Alain Hervé, CMS technical coordinator,...

  5. Building CMS Pixel Barrel Detector Modules

    CERN Document Server

    König, S; Horisberger, R.; Meier, B.; Rohe, T.; Streuli, S.; Weber, R.; Kastli, H.Chr.; Erdmann, W.

    2007-01-01

    For the barrel part of the CMS pixel tracker about 800 silicon pixel detector modules are required. The modules are bump bonded, assembled and tested at the Paul Scherrer Institute. This article describes the experience acquired during the assembly of the first ~200 modules.

  6. Section of CMS Beam Pipe Removed

    CERN Multimedia

    2013-01-01

    Seven components of the beam pipe located at the heart of the CMS detector were removed in recent weeks. The delicate operations were performed in several stages as the detector was opened. Video of the extraction of one section: http://youtu.be/arGuFgWM7u0

  7. Recent CMS Results on Flavor Physics

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    We present the latest results of the CMS experiment in the field of flavor physics. The observation of a new beauty baryon decaying to a Xi(b) baryon and a prompt pion is discussed, along with recent measurements of Lambda_b baryon and quarkonium production cross sections. Finally, we describe the search for rare decays of charmed mesons to dimuons.

  8. Jet substructure measurements at ATLAS and CMS

    CERN Document Server

    Dattagupta, Aparajita; The ATLAS collaboration

    2017-01-01

    A review is given of recent Run II measurements of jet substructure at CMS and ATLAS, as well as of the most relevant measurements from Run I. Quark and gluon discrimination, jet mass and other substructure observables are discussed, together with prospects for future measurements informed by new insight from theory.

  9. The electromagnetic calorimeter of the CMS experiment

    International Nuclear Information System (INIS)

    Diemoza, M.

    2003-01-01

    The Electromagnetic Calorimeter of the CMS experiment is made of about 80000 lead tungstate scintillating crystals. The project aims to achieve extreme precision in the measurement of photon and electron energies. General motivations, the main technical challenges and the key factors determining the energy resolution are discussed in the following.

  10. CMS has a heart of pixels

    CERN Multimedia

    2003-01-01

    At the core of CMS, particles will come into contact with tiny detector components, known as pixels, which are almost invisible to the naked eye. With these elementary cells measuring a mere 150 microns (or about 1/10 of a millimetre) along each side, a real technological leap has been made.

  11. A data Grid prototype for distributed data production in CMS

    International Nuclear Information System (INIS)

    Hafeez, Mehnaz; Samar, Asad; Stockinger, Heinz

    2001-01-01

    The CMS experiment at CERN is setting up a Grid infrastructure required to fulfill the needs imposed by Terabyte scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimize performance. We present the architecture, design and functionality of our first working Objectivity file replication prototype. The middle-ware of choice is the Globus toolkit that provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronization of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site specific policies. The data management granularity is the file rather than the object level. The first prototype is currently in use for the High Level Trigger (HLT) production (autumn 2000). Owing to these efforts, CMS is one of the pioneers to use the Data Grid functionality in a running production system. The project can be viewed as an evaluator of different strategies, a test for the capabilities of middle-ware tools and a provider of basic Grid functionalities
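
    The replica selection and synchronization mentioned above can be pictured as choosing, among the sites holding a copy of a file, the one with the lowest expected transfer cost. The snippet below is a conceptual sketch under that assumption, with made-up site names and a toy cost function; it is not the Globus-based implementation described in this record.

        # Conceptual replica selection: pick the copy of a file with the lowest toy "cost".
        # Site names, the catalogue layout and the cost model are invented for illustration.

        replica_catalogue = {
            "hlt_production_042.db": {
                "site_A": {"rtt_ms": 15, "load": 0.2},
                "site_B": {"rtt_ms": 120, "load": 0.1},
                "site_C": {"rtt_ms": 40, "load": 0.9},
            }
        }

        def transfer_cost(site_info):
            # Toy model: latency penalised further when the site is heavily loaded.
            return site_info["rtt_ms"] * (1.0 + site_info["load"])

        def select_replica(filename):
            replicas = replica_catalogue[filename]
            return min(replicas, key=lambda site: transfer_cost(replicas[site]))

        if __name__ == "__main__":
            print(select_replica("hlt_production_042.db"))   # -> "site_A"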

  12. Quantitative proteomic analysis of CMS-related changes in Honglian CMS rice anther.

    Science.gov (United States)

    Sun, Qingping; Hu, Chaofeng; Hu, Jun; Li, Shaoqing; Zhu, Yingguo

    2009-10-01

    Honglian (HL) cytoplasmic male sterility (CMS) is one of the rice CMS types and has been widely used in hybrid rice production in China. The CMS line (Yuetai A, YTA) has a Yuetai B (maintainer line, YTB) nuclear genome but a rearranged mitochondrial (mt) genome derived from Yuetai B. The fertility of the hybrid (HL-6) was restored by a restorer gene in the nuclear genome of the restorer line (9311). We used isotope-coded affinity tag (ICAT) technology to perform protein profiling of uninucleate-stage rice anthers and to identify CMS-HL related proteins. Two separate ICAT analyses were performed in this study: (1) anthers from YTA versus anthers from YTB, and (2) anthers from YTA versus anthers from HL-6. Based on the two analyses, a total of 97 unique proteins were identified and quantified in uninucleate-stage rice anthers at an error rate of less than 10%, of which eight proteins showed abundance changes of at least twofold between YTA and YTB. Triosephosphate isomerase, fructokinase II, DNA-binding protein GBP16 and ribosomal protein L3B were over-expressed in YTB, while oligopeptide transporter, floral organ regulator 1, kinase and S-adenosyl-L-methionine synthetase were over-expressed in YTA. The reduced abundance of proteins associated with energy production, together with the lower level of ATP equivalents detected in the CMS anther, indicates that the low level of energy production plays an important role in inducing CMS-HL.
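
    The twofold-change criterion used above reduces to a simple filter on quantitative ratios. The sketch below applies that generic filter to made-up protein abundance ratios (CMS line versus maintainer line); the numbers are illustrative, not the published measurements.

        # Flag proteins whose abundance ratio between two samples changes at least twofold.
        # The ratios below are made-up illustrations, not the published ICAT measurements.

        def differential_proteins(ratios, fold_threshold=2.0):
            """ratios: dict protein -> abundance ratio (sample A / sample B).
            A protein is kept if it is at least `fold_threshold` up- or down-regulated."""
            flagged = {}
            for protein, r in ratios.items():
                if r >= fold_threshold or r <= 1.0 / fold_threshold:
                    flagged[protein] = "up in A" if r >= fold_threshold else "up in B"
            return flagged

        if __name__ == "__main__":
            toy_ratios = {
                "triosephosphate isomerase": 0.4,   # more abundant in sample B
                "oligopeptide transporter": 2.6,    # more abundant in sample A
                "unchanged protein": 1.1,
            }
            print(differential_proteins(toy_ratios))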

  13. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Balcas, J. [Vilnius U.; Belforte, S. [Trieste U.; Bockelman, B. [Nebraska U.; Colling, D. [Imperial Coll., London; Gutsche, O. [Fermilab; Hufnagel, D. [Fermilab; Khan, F. [Quaid-i-Azam U.; Larson, K. [Fermilab; Letts, J. [UC, San Diego; Mascheroni, M. [Milan Bicocca U.; Mason, D. [Fermilab; McCrea, A. [UC, San Diego; Piperov, S. [Brown U.; Saiz-Santos, M. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Tanasijczuk, A. [UC, San Diego; Wissing, C. [DESY

    2015-12-23

    CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program, with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer over grid, cloud, local batch and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources and the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
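
    The value of a common provisioning layer is that the workload only ever talks to one interface while the backends differ. The sketch below illustrates that idea with a tiny dispatcher over hypothetical grid and cloud backends; it is a conceptual illustration only and does not reflect the internals of glideinWMS itself.

        # Conceptual sketch of a single provisioning interface over heterogeneous backends.
        # Backend names and behaviour are invented; this is not how glideinWMS is implemented.

        class PilotBackend:
            def start_pilots(self, n):
                raise NotImplementedError

        class GridBackend(PilotBackend):
            def start_pilots(self, n):
                return [f"grid-pilot-{i}" for i in range(n)]

        class CloudBackend(PilotBackend):
            def start_pilots(self, n):
                return [f"cloud-vm-{i}" for i in range(n)]

        class ProvisioningLayer:
            """Single entry point: callers request slots, not specific resource types."""
            def __init__(self, backends):
                self.backends = backends

            def provision(self, slots_needed):
                pilots, per_backend = [], max(1, slots_needed // len(self.backends))
                for backend in self.backends:
                    pilots.extend(backend.start_pilots(per_backend))
                return pilots[:slots_needed]

        if __name__ == "__main__":
            layer = ProvisioningLayer([GridBackend(), CloudBackend()])
            print(layer.provision(4))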

  14. CMS users data management service integration and first experiences with its NoSQL data storage

    CERN Document Server

    Riahi, H; Cinquilli, M; Hernandez, J M; Konstantinov, P; Mascheroni, M; Santocchia, A

    2014-01-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in the use of CMS computing resources when transferring analysis job outputs synchronously, as soon as they are produced on the job execution node, to the remote site. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) for input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of user file handling, namely file transfer and publication. The AsyncStageOut is integrated with the common CMS/ATLAS Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and repor...
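
    The "thin application over a NoSQL store" pattern described above can be pictured as a small state machine recorded in per-file documents. The sketch below uses an in-memory dictionary in place of CouchDB, with invented field names, states and example values; it only illustrates the asynchronous acquire/transfer/publish bookkeeping, not the actual AsyncStageOut schema.

        # Toy state machine for asynchronous user-file stage-out bookkeeping.
        # Field names and states are illustrative; a real service would persist them in CouchDB.

        documents = {}   # stands in for the NoSQL document store

        def register_output(doc_id, source_lfn, destination_site):
            documents[doc_id] = {
                "source_lfn": source_lfn,
                "destination": destination_site,
                "state": "new",          # new -> acquired -> transferred -> published / failed
            }

        def acquire(doc_id):
            documents[doc_id]["state"] = "acquired"

        def mark_transferred(doc_id, ok=True):
            documents[doc_id]["state"] = "transferred" if ok else "failed"

        def publish(doc_id):
            if documents[doc_id]["state"] == "transferred":
                documents[doc_id]["state"] = "published"

        if __name__ == "__main__":
            register_output("job42_output.root", "/store/user/example/output.root", "T2_IT_Pisa")
            acquire("job42_output.root")
            mark_transferred("job42_output.root")
            publish("job42_output.root")
            print(documents)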

  15. ATLAS, CMS and New Challenges for Public Communication

    Science.gov (United States)

    Taylor, Lucas; Barney, David; Goldfarb, Steven

    2011-12-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  16. ATLAS, CMS and new challenges for public communication

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab; Barney, David [CERN; Goldfarb, Steven [Michigan U.

    2011-01-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  17. A data grid prototype for distributed data production in CMS

    CERN Document Server

    Hafeez, M; Stockinger, H E

    2001-01-01

    The CMS experiment at CERN is setting up a grid infrastructure required to fulfil the needs imposed by Terabyte scale productions for the next few years. The goal is to automate the production and at the same time allow the users to interact with the system, if required, to make decisions which would optimise performance. We present the architecture, design and functionality of our first working objectivity file replication prototype. The middle-ware of choice is the Globus toolkit that provides promising functionality. Our results prove the ability of the Globus toolkit to be used as an underlying technology for a world-wide Data Grid. The required data management functionality includes high speed file transfers, secure access to remote files, selection and synchronisation of replicas and managing the meta information. The whole system is expected to be flexible enough to incorporate site specific policies. The data management granularity is the file rather than the object level. The first prototype is curre...

  18. ATLAS, CMS and New Challenges for Public Communication

    International Nuclear Information System (INIS)

    Taylor, Lucas; Barney, David; Goldfarb, Steven

    2011-01-01

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  19. ATLAS, CMS and New Challenges for Public Communication

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Lucas [Fermilab, PO Box 500, Batavia, IL 60510-5011 (United States); Barney, David [CERN, CH-1211, Geneva 23 (Switzerland); Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)

    2011-12-23

    On 30 March 2010 the first high-energy collisions brought the LHC experiments into the era of research and discovery. Millions of viewers worldwide tuned in to the webcasts and followed the news via Web 2.0 tools, such as blogs, Twitter, and Facebook, with 205,000 unique visitors to CERN's Web site. Media coverage at the experiments and in institutes all over the world yielded more than 2,200 news items including 800 TV broadcasts. We describe the new multimedia communications challenges, due to the massive public interest in the LHC programme, and the corresponding responses of the ATLAS and CMS experiments, in the areas of Web 2.0 tools, multimedia, webcasting, videoconferencing, and collaborative tools. We discuss the strategic convergence of the two experiments' communications services, information systems and public database of outreach material.

  20. Tau Identification at CMS in Run II

    CERN Document Server

    Ojalvo, Isabel

    2016-01-01

    During LHC Long Shutdown 1, necessary upgrades to the CMS detector were made. CMS also took the opportunity to further improve particle reconstruction. A number of improvements were made to the hadronic tau reconstruction and identification algorithms. In particular, the electromagnetic strip reconstruction of the Hadron Plus Strips (HPS) algorithm was improved to better model the signal of pi0s from tau decays. This modification improves the energy response and removes the tau footprint from the isolation area. In addition, improved discriminators combining isolation and tau lifetime variables, as well as an anti-electron discriminator based on a multivariate analysis technique, were developed. The results of these improvements are presented and the validation of tau identification using a variety of techniques is shown.

  1. Explaining CMS lepton excesses with supersymmetry

    CERN Multimedia

    CERN. Geneva; Prof. Allanach, Benjamin

    2014-01-01

    1) Kostas Theofilatos will give an introduction to the CMS result. 2) Ben Allanach: Several CMS analyses involving di-leptons have recently reported small 2.4-2.8 sigma local excesses: nothing to get too excited about, but worth keeping an eye on nonetheless. In particular, a search in the $lljj$ + missing $p_T$ channel, a search for $W_R$ in the $lljj$ channel and a di-leptoquark search in the $lljj$ channel and the $ljj$ + missing $p_T$ channel have all yielded small excesses. We interpret the first excess in the MSSM, showing that the interpretation is viable in terms of other constraints, despite only having squark masses of around 1 TeV. We can explain the last three excesses with a single R-parity violating coupling that predicts a non-zero contribution to the neutrinoless double beta decay rate.

  2. Online Event Selection at the CMS experiment

    CERN Document Server

    Konecki, M

    2004-01-01

    Triggering in the high-rate environment of the LHC is a challenging task. The CMS experiment has developed a two-stage trigger system. The Level-1 Trigger is based on custom hardware devices and is designed to reduce the 40 MHz LHC bunch-crossing rate to a maximum event rate of ~100 kHz. The further reduction of the event rate to O(100 Hz), suitable for permanent storage, is performed in the High-Level Trigger (HLT) which is based on a farm of commercial processors. The methods used for object identification and reconstruction are presented. The CMS event selection strategy is discussed. The performance of the HLT is also given.
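
    The rate reduction described above corresponds to two large rejection factors applied in series, as the short arithmetic sketch below shows (using the round numbers quoted in the abstract).

        # Rejection factors implied by the quoted two-stage trigger rates.
        bunch_crossing_rate_hz = 40e6    # LHC input rate
        level1_output_hz = 100e3         # Level-1 Trigger output
        hlt_output_hz = 100.0            # High-Level Trigger output to storage

        l1_rejection = bunch_crossing_rate_hz / level1_output_hz     # 400
        hlt_rejection = level1_output_hz / hlt_output_hz             # 1000
        print(l1_rejection, hlt_rejection, l1_rejection * hlt_rejection)  # 400 1000 400000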

  3. MPPC Photon Sensor Operational Experience in CMS

    CERN Document Server

    Kunsken, Andreas

    2015-01-01

    The CMS Outer Hadron Calorimeter (HO) is the first large-scale hadron collider detector to use SiPMs. To build the system we purchased and measured 3000 Hamamatsu MPPCs. 1656 channels of MPPCs with 40 MHz readout have currently been installed in CMS. We report on comparisons of in-situ and vendor-supplied measurements. We present results on in-situ working point optimization by IV scanning and temperature-versus-voltage scanning. We have developed several techniques for determining the breakdown voltage in situ. We compare the performance of each technique and its success in working point optimization. We present results on gain, noise and cross-talk monitoring. We present results on overall system stability.
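
    One common way to estimate a SiPM breakdown voltage from an IV scan, in the spirit of the in-situ IV scanning mentioned above, is to look for the peak of the logarithmic derivative d(ln I)/dV. The sketch below applies that generic method to synthetic data; it is not the specific procedure used for the HO MPPCs.

        # Generic breakdown-voltage estimate from an IV curve: peak of d(ln I)/dV.
        # The synthetic IV points below are illustrative, not real MPPC measurements.
        import numpy as np

        def estimate_breakdown_voltage(voltages, currents):
            v = np.asarray(voltages, dtype=float)
            i = np.asarray(currents, dtype=float)
            dlnI_dV = np.gradient(np.log(i), v)      # logarithmic derivative
            return v[np.argmax(dlnI_dV)]             # voltage where it peaks

        if __name__ == "__main__":
            v = np.linspace(68.0, 72.0, 81)
            # Toy curve: tiny surface current below ~70 V, steep Geiger-mode rise above it.
            i = 1e-9 + 1e-6 * np.clip(v - 70.0, 0.0, None) ** 2
            print(round(estimate_breakdown_voltage(v, i), 2))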

  4. CMS penalizes 758 hospitals for safety incidents

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2015-12-01

    Full Text Available. No abstract available; article truncated after 150 words. The Centers for Medicare and Medicaid Services (CMS) is penalizing 758 hospitals with higher rates of patient safety incidents, and more than half of those were also fined last year, as reported by Kaiser Health News (1). Among the hospitals being financially punished are some well-known institutions, including Yale New Haven Hospital, Medstar Washington Hospital Center in DC, Grady Memorial Hospital, Northwestern Memorial Hospital in Chicago, Indiana University Health, Brigham and Women's Hospital, Tufts Medical Center, University of North Carolina Hospital, the Cleveland Clinic, Hospital of the University of Pennsylvania, Parkland Health and Hospital, and the University of Virginia Medical Center (Complete List of Hospitals Penalized 2016). In the Southwest the list includes Banner University Medical Center in Tucson, Ronald Reagan UCLA Medical Center, Stanford Health Care, Denver Health Medical Center and the University of New Mexico Medical Center (for a list of Southwest hospitals see Appendix 1). In total, CMS ...

  5. Monitoring the CMS Data Acquisition System

    CERN Document Server

    Bauer, Gerry; Biery, K; Branson, J; Cano, E; Cheung, H; Ciganek, M; Cittolin, S; Coarasa, J A; Deldicque, C; Dusinberre, E; Erhan, S; Fortes Rodrigues, F; Gigi, D; Glege, F; Gomez-Reino, R; Gutleber, J; Hatton, D; Laurens, J F; Lopez Perez, J A; Meijers, F; Meschi, E; Meyer, A; Mommsen, R; Moser, R; O'Dell, V; Oh, A; Orsini, L B; Patras, V; Paus, C; Petrucci, A; Pieri, M; Racz, A; Sakulin, H; Sani, M; Schieferdecker, P; Schwick, C; Shpakov, D; Simon, S; Sumorok, K; Zanetti, M.

    2010-01-01

    The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of col...
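
    The hierarchical data collection mentioned above can be illustrated as aggregating per-service metrics first within regions and then globally, flagging deviations on the way up. The sketch below does this with invented service names and metric values; it is only a conceptual picture of the approach, not the CMS implementation.

        # Conceptual two-level aggregation of monitoring values: services -> region -> global.
        # Service names and metric values are invented for illustration.

        regions = {
            "region_A": {"builder_unit_01": 350.0, "builder_unit_02": 410.0},
            "region_B": {"filter_unit_01": 120.0, "filter_unit_02": 0.0},   # 0.0: suspicious
        }

        def aggregate_region(services):
            values = list(services.values())
            return {"sum": sum(values), "min": min(values), "n": len(values)}

        def aggregate_global(regions):
            summaries = {name: aggregate_region(srv) for name, srv in regions.items()}
            total = sum(s["sum"] for s in summaries.values())
            # Flag regions where some service reports zero throughput (possible deviation).
            alarms = [name for name, s in summaries.items() if s["min"] == 0.0]
            return total, alarms

        if __name__ == "__main__":
            print(aggregate_global(regions))   # (880.0, ['region_B'])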

  6. CMS Tracker Alignment Performance Results 2016

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The tracking system of the CMS detector provides excellent resolution for charged-particle tracks and an efficient way of tagging jets. In order to reconstruct good-quality tracks, the position and orientation of each silicon pixel and strip module need to be determined with a precision of several micrometers. The presented alignment results are derived following a global (Millepede-II) and a local (HipPy) fit approach. The performance of the CMS tracker alignment in 2016, using cosmic-ray data and the complete set of proton-proton collision data recorded with the magnetic field at 3.8 T, has been studied. The data-driven validation of the results is shown. The time-dependent movement of the pixel detector's large-scale structure is demonstrated.

  7. Top quark mass measurements with CMS

    CERN Document Server

    Kovalchuk, Nataliia

    2017-01-01

    Measurements of the top quark mass are presented, obtained from CMS data collected in proton-proton collisions at the LHC at centre-of-mass energies of 7 TeV and 8 TeV. The mass of the top quark is measured using several methods and channels, including the reconstructed invariant mass distribution of the top quark, an analysis of endpoint spectra as well as measurements from shapes of top quark decay distributions. The dependence of the mass measurement on the kinematic phase space is investigated. The results of the various channels are combined and compared to the world average. The top mass and also $\alpha_S$ are extracted from the top pair cross section measured at CMS.

  8. Data quality monitoring of the CMS tracker

    CERN Document Server

    Potamianos, Karolos

    2009-01-01

    The Physics and Data Quality Monitoring (DQM) framework aims at providing a homogeneous monitoring environment across the various applications related to data taking at the CMS experiment. It has been designed to be used during online data taking as well as during offline reconstruction. The goal of the online system is to monitor detector performance and identify problems very efficiently during data collection so that proper actions can be taken. On the other hand, reconstruction or calibration problems can be detected during offline processing using the same tool. The monitoring is performed with histograms, which are filled with information from raw and reconstructed data. All histograms can then be displayed both in the central CMS DQM graphical user interface (GUI), as well as in Tracker-specific expert GUIs and so-called Tracker Maps. Applications are in place to further process the information from these basic histograms by summarizing them in overview plots, by evaluating them with automated statistica...
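
    An automated histogram check of the kind alluded to above can be as simple as a per-bin chi-square comparison of a monitored histogram against a reference, flagging the histogram when the agreement is improbable. The sketch below shows that generic idea with NumPy/SciPy on toy bin contents; the histograms and the p-value threshold are invented, and this is not the actual DQM test suite.

        # Generic quality test: chi-square comparison of a monitored histogram to a reference.
        # Bin contents and the p-value threshold are invented for illustration.
        import numpy as np
        from scipy.stats import chi2

        def histogram_ok(observed, reference, p_threshold=0.01):
            obs = np.asarray(observed, dtype=float)
            ref = np.asarray(reference, dtype=float)
            # Scale the reference to the observed total so only shapes are compared.
            ref = ref * obs.sum() / ref.sum()
            mask = ref > 0
            chi2_value = np.sum((obs[mask] - ref[mask]) ** 2 / ref[mask])
            ndof = mask.sum() - 1
            p_value = chi2.sf(chi2_value, ndof)
            return p_value > p_threshold, p_value

        if __name__ == "__main__":
            reference = [100, 220, 300, 220, 100]
            good_run = [95, 230, 290, 225, 105]
            bad_run = [95, 230, 60, 225, 105]     # a dead region in the middle bin
            print(histogram_ok(good_run, reference))
            print(histogram_ok(bad_run, reference))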

  9. Concept of the CMS Trigger Supervisor

    CERN Document Server

    Magrans de Abril, Ildefons; Varela, Joao

    2006-01-01

    The Trigger Supervisor is an online software system designed for the CMS experiment at CERN. Its purpose is to provide a framework to set up, test, operate and monitor the trigger components on one hand and to manage their interplay and the information exchange with the run control part of the data acquisition system on the other. The Trigger Supervisor is conceived to provide a simple and homogeneous client interface to the online software infrastructure of the trigger subsystems. This document specifies the functional and non-functional requirements, design and operational details, and the components that will be delivered in order to facilitate a smooth integration of the trigger software in the context of CMS.

  10. Performance of the CMS Event Builder

    CERN Document Server

    Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan Richard; Gigi, Dominique; Gladki, Maciej Szymon; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr

    2017-01-01

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz. It transports event data at an aggregate throughput of ~100 GB/s to the high-level trigger (HLT) farm. The CMS DAQ system has been completely rebuilt during the first long shutdown of the LHC in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gb/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR CLOS network has been chosen for the event builder. We report on the performance of the event builder system and the steps taken to exploit the full potential of the network technologies.
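
    The quoted figures fix the average event size handled by the event builder, as the short calculation below shows.

        # Average event size implied by the quoted event-builder figures.
        aggregate_throughput_bytes_per_s = 100e9   # ~100 GB/s into the HLT farm
        event_rate_hz = 100e3                      # 100 kHz level-1 accept rate

        avg_event_size_bytes = aggregate_throughput_bytes_per_s / event_rate_hz
        print(avg_event_size_bytes / 1e6, "MB per event on average")   # -> 1.0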

  11. b-physics with ATLAS and CMS

    International Nuclear Information System (INIS)

    Oakes, L.

    2014-01-01

    The ATLAS and CMS b-physics programmes are summarized after nearly 2 years of data taking. The data were collected in √s = 7 TeV proton-proton collisions at the LHC. Results presented include B meson lifetime measurements using 40 pb⁻¹ of 2010 data, which demonstrate good agreement with previous measurements, and competitive rare decay studies using the full 2011 data set of up to 5 fb⁻¹. ATLAS measures a B_s^0 meson lifetime of [1.4 ± 0.08(stat) ± 0.05(syst)] ps in the mode B_s^0 → J/ψφ. The CMS experiment finds a lifetime of [1.59 ± 0.08(stat)] ps

  12. The CMS Tracker Readout Front End Driver

    CERN Document Server

    Foudas, C.; Ballard, D.; Church, I.; Corrin, E.; Coughlan, J.A.; Day, C.P.; Freeman, E.J.; Fulcher, J.; Gannon, W.J.F.; Hall, G.; Halsall, R.N.J.; Iles, G.; Jones, J.; Leaver, J.; Noy, M.; Pearson, M.; Raymond, M.; Reid, I.; Rogers, G.; Salisbury, J.; Taghavi, S.; Tomalin, I.R.; Zorba, O.

    2004-01-01

    The Front End Driver, FED, is a 9U 400mm VME64x card designed for reading out the Compact Muon Solenoid, CMS, silicon tracker signals transmitted by the APV25 analogue pipeline Application Specific Integrated Circuits. The FED receives the signals via 96 optical fibers at a total input rate of 3.4 GB/sec. The signals are digitized and processed by applying algorithms for pedestal and common mode noise subtraction. Algorithms that search for clusters of hits are used to further reduce the input rate. Only the cluster data along with trigger information of the event are transmitted to the CMS data acquisition system using the S-LINK64 protocol at a maximum rate of 400 MB/sec. All data processing algorithms on the FED are executed in large on-board Field Programmable Gate Arrays. Results on the design, performance, testing and quality control of the FED are presented and discussed.
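
    The processing steps named above (pedestal subtraction, common-mode noise subtraction and cluster finding) can be sketched generically as below, using NumPy on toy strip ADC values; the thresholds and the per-chip median common-mode estimate are illustrative choices, not the exact FED firmware algorithms.

        # Generic sketch of pedestal subtraction, common-mode subtraction and clustering
        # on one readout chip's worth of toy strip ADC values. Thresholds are illustrative.
        import numpy as np

        def find_clusters(adc, pedestals, seed_threshold=5.0):
            signal = adc - pedestals                      # pedestal subtraction
            signal = signal - np.median(signal)           # common-mode (per-chip) subtraction
            above = signal > seed_threshold
            clusters, current = [], []
            for strip, is_hit in enumerate(above):
                if is_hit:
                    current.append(strip)
                elif current:
                    clusters.append((current[0], current[-1], float(signal[current].sum())))
                    current = []
            if current:
                clusters.append((current[0], current[-1], float(signal[current].sum())))
            return clusters                               # (first strip, last strip, charge)

        if __name__ == "__main__":
            pedestals = np.full(128, 100.0)
            adc = pedestals + np.random.normal(0.0, 1.5, 128)
            adc[60:63] += [20.0, 35.0, 15.0]              # injected three-strip hit
            print(find_clusters(adc, pedestals))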

  13. Status of the CMS magnet (MT17)

    CERN Document Server

    Hervé, A; Campi, D; Cannarsa, P; Fabbricatore, P; Feyzi, F; Gerwig, H; Grillet, J P; Horváth, I L; Kaftanov, V S; Kircher, F; Loveless, R; Maugain, J M; Perinic, G; Rykaczewski, H; Sbrissa, E; Smith, R P; Veillet, L

    2002-01-01

    The CMS experiment (Compact Muon Solenoid) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with a free bore of 6 m diameter and 12.5-m length, enclosed inside a 10 000-ton return yoke. The magnet will be assembled and tested in a surface hall at Point 5 of the LHC at the beginning of 2004 before being transferred by heavy lifting means to an experimental hall 90 m below ground level. The design and construction of the magnet is a common project of the CMS Collaboration. The task is organized by a CERN based group with strong technical and contractual participation from CEA Saclay, ETH Zurich, Fermilab, INFN Genova, ITEP Moscow, University of Wisconsin and CERN. The magnet project will be described, with emphasis on the present status of the fabrication. (15 refs).

  14. Efficient Monitoring of CRAB Jobs at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Silva, J. M.D. [Sao Paulo, IFT; Balcas, J. [Caltech; Belforte, S. [INFN, Trieste; Ciangottini, D. [INFN, Perugia; Mascheroni, M. [Fermilab; Rupeika, E. A. [Vilnius U.; Ivanov, T. T. [Sofiya U.; Hernandez, J. M. [Madrid, CIEMAT; Vaandering, E. [Fermilab

    2017-11-22

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, to help operators debug user problems, and to minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  15. The central part of CMS is lowered

    CERN Multimedia

    Maximilien Brice

    2007-01-01

    On 28 February 2007, the CMS central piece containing the magnet and weighing as much as five Jumbo jets (1920 tonnes) was gently lowered into place. Only 20 cm separated the detector, which was suspended by four huge cables, each with 55 strands and sophisticated monitoring to minimize sway and tilt, from the walls of the shaft. The entire process took about 10 hours to complete.

  16. Heavy ion studies with CMS HF calorimeter

    International Nuclear Information System (INIS)

    Damgov, I.; Genchev, V.; Kolosov, V.A.; Lokhtin, I.P.; Petrushanko, S.V.; Sarycheva, L.I.; Teplov, S.Yu.; Shmatov, S.V.; Zarubin, P.I.

    2001-01-01

    The capability of the very forward (HF) calorimeter of the CMS detector at LHC to be applied to specific studies with heavy ion beams is discussed. The simulated responses of the HF calorimeter to nucleus-nucleus collisions are used for the analysis of different problems: reconstruction of the total energy flow in the forward rapidity region, accuracy of determination of the impact parameter of collision, study of fluctuations of the hadronic-to-electromagnetic energy ratio, fast inelastic event selection

  17. CMS: Simulated Higgs production and decay

    CERN Multimedia

    David Barney

    2005-01-01

    This track is an example of simulated data modelled for the CMS detector on the Large Hadron Collider (LHC) at CERN, which will begin taking data in 2008. These graphics show two possible signatures that a Higgs boson may leave in the detector. As the Higgs will be very short-lived, it cannot be observed directly but rather its production is inferred from the products of its decay.

  18. Cross section of the CMS solenoid

    CERN Multimedia

    Tejinder S. Virdee, CERN

    2005-01-01

    The pictures show a cross section of the CMS solenoid. One can see four layers of the superconducting coil, each of which contains the superconductor (central part, copper coloured - niobium-titanium strands in a copper coating, made into a "Rutherford cable"), surrounded by an ultra-pure aluminium as a magnetic stabilizer, then an aluminium alloy as a mechanical stabilizer. Besides the four layers there is an aluminium mechanical piece that includes pipes that transport the liquid helium.

  19. Regional CMS Modeling: Southwest Florida Gulf Coast

    Science.gov (United States)

    2016-05-01

    District (SAJ) jurisdiction and includes the coastline from Clearwater Beach in Pinellas County, FL, to Venice Beach in Sarasota County, FL (Figure 1...moving into inlet channels/shoals (Legault, in preparation). Mining this resource of beach quality sediment carries inherent risks of disrupting the...adjacent beaches. For Federal projects where it is deemed necessary to mine sediment from an ebb shoal, the CMS (a process-based, morphology-change

  20. Lustre filesystem for CMS storage element (SE)

    International Nuclear Information System (INIS)

    Wu, Y; Kim, B; Avery, P; Fu, Y; Bourilkov, D; Taylor, C; Prescott, C; Rodriguez, J

    2011-01-01

    This paper presents our effort to integrate the Lustre filesystem with BeStMan, GridFTP and Ganglia to make it a fully functional WLCG SE (Storage Element). We first describe the configuration of our Lustre filesystem at the University of Florida and our integration process. We then present benchmark performance figures, IO rates from CMS analysis jobs, and the WAN data transfer performance obtained with the Lustre SE.